Diagnostic Reference Levels

Cynthia H. McCollough, PhD, Mayo Clinic, Rochester, MN


Diagnostic reference levels were first mentioned by the International Commission on Radiological Protection (ICRP) in 1990 [1] and subsequently recommended in greater detail in 1996 [2]. From the 1996 report:

Diagnostic reference levels are supplements to professional judgment and do not provide a dividing line between good and bad medicine. It is inappropriate to use them for regulatory or commercial purposes. Diagnostic reference levels apply to medical exposure, not to occupational and public exposure. Thus, they have no link to dose limits or constraints. Ideally, they should be the result of a generic optimization of protection. In practice, this is unrealistically difficult and it is simpler to choose the initial values as a percentile point on the observed distribution of doses to patients. The values should be selected by professional medical bodies and reviewed at intervals that represent a compromise between the necessary stability and the long-term changes in the observed dose distributions. The selected values will be specific to a country or region.

Diagnostic reference levels are not the suggested or ideal dose for a particular procedure, nor an absolute upper limit on dose. Rather, they represent the dose level at which an investigation of the appropriateness of the dose should be initiated. In conjunction with an image quality assessment, a qualified medical physicist should work with the radiologist and technologist to determine whether the required level of image quality can be attained at lower dose levels. Thus, reference levels act as “trigger levels” that initiate quality improvement. Their primary value is to identify dose levels that may be unnecessarily high – that is, to identify those situations where it may be possible to reduce dose without compromising the required level of image quality.
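The "trigger level" semantics described above can be sketched in a few lines: a reference level flags a protocol for review rather than capping its dose. The protocol names and dose values below are hypothetical, not values from any cited survey.

```python
# Minimal sketch of the "trigger level" logic: exceeding a diagnostic
# reference level initiates an investigation; it is not a dose limit.
# All protocol names and dose values here are hypothetical.

REFERENCE_LEVELS_MGY = {"head": 60.0, "abdomen": 25.0}  # hypothetical DRLs (CTDIvol, mGy)

def needs_review(protocol: str, typical_dose_mgy: float) -> bool:
    """True if a facility's typical dose exceeds the reference level,
    i.e., an investigation of appropriateness should be initiated."""
    return typical_dose_mgy > REFERENCE_LEVELS_MGY[protocol]

print(needs_review("abdomen", 28.0))  # exceeds the reference level -> True
print(needs_review("head", 55.0))     # below the reference level -> False
```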

Use of Diagnostic Reference Levels to Reduce Patient Dose

The use of diagnostic reference levels as an important dose optimization tool is endorsed by many professional and regulatory organizations, including the ICRP, American College of Radiology (ACR), American Association of Physicists in Medicine (AAPM), United Kingdom (U.K.) Health Protection Agency, International Atomic Energy Agency (IAEA), and European Commission (EC). Reference levels are typically set at the 75th percentile of the dose distribution from a survey conducted across a broad user base (i.e., large and small facilities, public and private, hospital and outpatient) using a specified dose measurement protocol and phantom. They are established both regionally and nationally, and considerable variations have been seen across both regions and countries [3]. Dose surveys should be repeated periodically to establish new reference levels, which can demonstrate changes in both the mean and standard deviation of the dose distribution.
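The percentile computation behind a reference level can be sketched as follows. The survey values and the linear-interpolation convention are illustrative assumptions only; national surveys use their own data and statistical conventions.

```python
# Sketch: deriving a diagnostic reference level as the 75th percentile
# of a dose survey. The CTDIvol values (mGy) below are hypothetical
# results reported by surveyed facilities for one exam and phantom.
survey_doses = [8.2, 9.5, 10.1, 11.0, 11.8, 12.4, 13.0, 14.2, 15.6, 18.9]

def reference_level(doses, percentile=75):
    """Return the given percentile of the dose distribution,
    using linear interpolation between order statistics."""
    ordered = sorted(doses)
    # 0-based fractional rank of the percentile within the sample.
    rank = (percentile / 100) * (len(ordered) - 1)
    lower = int(rank)
    frac = rank - lower
    if lower + 1 < len(ordered):
        return ordered[lower] + frac * (ordered[lower + 1] - ordered[lower])
    return ordered[-1]

drl = reference_level(survey_doses)
print(f"75th-percentile reference level: {drl:.1f} mGy")  # prints 13.9 mGy
```

Facilities whose typical doses fall above this value would investigate; over successive surveys, that process narrows the distribution and lowers its mean, as described below.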

The use of diagnostic reference levels has been shown to reduce both the overall dose and the range of doses observed in clinical practice. For example, U.K. national dose surveys demonstrated a 30% decrease in typical radiographic doses from 1984 to 1995 and an average drop of about 50% between 1985 and 2000 [4,5]. While some of these reductions may reflect improvements in equipment dose efficiency, the investigations triggered when a reference dose is exceeded can often identify dose reduction strategies that do not negatively impact the overall quality of the specific diagnostic exam. Thus, data points above the 75th percentile are, over time, moved below the 75th percentile – with the net effect of a narrower dose distribution and a lower mean dose.

CT Diagnostic Reference Levels From Other Countries

Diagnostic reference levels must be defined in terms of an easily and reproducibly measured dose metric, using technique parameters that reflect those used in a site’s clinical practice. In radiographic and fluoroscopic imaging, the typically measured quantities are entrance skin dose (radiography) and dose-area product (fluoroscopy). Dose can be measured directly with thermoluminescent dosimeters (TLDs) or derived from exposure measurements. Some authors instead survey typical technique factors and model the dose metric of interest.

In CT, published diagnostic reference levels use CTDI-based metrics such as the weighted CT dose index (CTDIw), the volume CT dose index (CTDIvol), and the dose-length product (DLP). Normalized CTDI values (CTDI per mAs) can be multiplied by typical technique factors, or CTDI values can be measured directly at the typical clinical technique factors. Tables 1 and 2 below provide a summary of CT reference levels from a variety of national dose surveys.
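As a sketch of how these metrics relate, the following assumes the standard relationships CTDIvol = CTDIw / pitch and DLP = CTDIvol × scan length; the normalized CTDI value, mAs, pitch, and scan length are hypothetical protocol values, not survey data.

```python
# Sketch of the CTDI-based dose metrics named above. Assumes the
# standard relations CTDIvol = CTDIw / pitch and DLP = CTDIvol x length.
# All numeric inputs are hypothetical.

def ctdi_w(n_ctdi_w_per_mas: float, mas: float) -> float:
    """Weighted CTDI (mGy) from a normalized value (mGy/mAs) and the mAs."""
    return n_ctdi_w_per_mas * mas

def ctdi_vol(ctdi_w_mgy: float, pitch: float) -> float:
    """Volume CTDI (mGy): weighted CTDI corrected for helical pitch."""
    return ctdi_w_mgy / pitch

def dlp(ctdi_vol_mgy: float, scan_length_cm: float) -> float:
    """Dose-length product (mGy·cm) over the scanned range."""
    return ctdi_vol_mgy * scan_length_cm

# Hypothetical abdomen protocol: 0.08 mGy/mAs phantom measurement,
# 200 mAs, pitch 1.0, 25 cm scan length.
w = ctdi_w(0.08, 200)   # 16.0 mGy
v = ctdi_vol(w, 1.0)    # 16.0 mGy
print(f"CTDIvol = {v:.1f} mGy, DLP = {dlp(v, 25):.0f} mGy·cm")
```

Either path described above fits this sketch: multiply a normalized CTDI by the site’s typical mAs, or substitute a CTDI value measured directly at the clinical technique factors.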