Overdiagnosis associated with breast cancer screening has been the subject of much attention in recent years. The notion that cancer screening—largely believed to be beneficial—could actually be harmful is simultaneously fascinating and difficult to believe.
With the publication of multiple studies suggesting alarmingly high frequencies of overdiagnosis, calls for shared and informed decision-making have intensified. By being aware of both the benefits and the harms of screening, women should ideally be able to make decisions about screening that are in accordance with their preferences and values.
As knowledgeable consumers of health care, we should absolutely be aware of both the upside and the downside of any intervention that we are offered. However, the notion of informed decision-making presumes that the information is actually there to be packaged up and provided to the patient.
Unfortunately, this is not true in the case of overdiagnosis due to mammography screening. What is true is that the plethora of studies on the topic and the repeated citation of a few high-profile estimates have given us a false sense of certainty about the estimated frequency of overdiagnosis. In fact, most published estimates are either biased, misinterpreted, or both.
Dubious Estimates
Recently, an article in the journal Health Affairs was published with the title, “National Expenditure for False-Positive Mammograms and Breast Cancer Overdiagnoses Estimated at $4 Billion a Year.”1 Anyone could be forgiven for presuming that the number of overdiagnoses must be known in order to derive national cost figures. However, a closer look at the article reveals that this is not the case. Not only does the article consider a wide range of overdiagnosis frequencies, but the validity of the two studies2,3 specifically cited—yielding estimates of 31% and 22% of cases diagnosed—is controversial.
The estimate of 31% comes from comparing the observed incidence of invasive and in situ breast cancer in the United States in 2008 with an extrapolation of what the incidence would have been in 2008 in the absence of screening.2 However, there are no concrete data on how breast cancer rates would have changed in the relevant age group from the late 1970s, when mammography began, to almost 3 decades later. Indeed, the duration of the extrapolation—30 years—means that the results are highly sensitive to any assumption made about what the trend in incidence would have been in the absence of screening.
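This sensitivity is easy to demonstrate. The following sketch uses purely hypothetical incidence figures (none taken from the cited study) to show how an excess-incidence calculation of this kind swings with the assumed background trend over a 30-year extrapolation:

```python
# Illustrative only: BASELINE and OBSERVED are hypothetical incidence rates,
# not data from the cited study.
BASELINE = 100.0   # assumed pre-screening incidence per 100,000 (late 1970s)
OBSERVED = 130.0   # assumed observed incidence per 100,000 in 2008
YEARS = 30         # length of the extrapolation

def overdiagnosis_pct(annual_trend_pct: float) -> float:
    """Excess incidence as a share of observed incidence, given an assumed
    annual background trend in the absence of screening."""
    counterfactual = BASELINE * (1 + annual_trend_pct / 100) ** YEARS
    return 100 * (OBSERVED - counterfactual) / OBSERVED

for trend in (0.0, 0.5, 1.0):
    print(f"assumed trend {trend:.1f}%/yr -> estimated overdiagnosis "
          f"{overdiagnosis_pct(trend):.0f}% of observed cases")
```

With these made-up numbers, assuming a flat background trend yields an overdiagnosis estimate of about 23%, a 0.5% annual rise cuts it by half, and a 1% annual rise eliminates the excess entirely, which is the crux of the criticism: the answer is driven by an assumption that cannot be checked against data.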
The estimate of 22% comes from a clinical trial that included a control group.3 Thus, one does not have to guess at what the incidence would have been without screening. Even so, there are issues with this figure. The trial was a stop-screen trial of clinical breast exam vs mammography plus clinical breast exam. This means that women were offered screening only for the first 5 years and simply followed for mortality thereafter.
The 22% represents the excess invasive cases in the screened arm after 15 years divided by the cases detected on the same arm in the first 5 years. In this setting, the correct denominator for the overdiagnosis estimate is debatable: should it be cases detected after 5 years or cases detected after 15 years? It seems that the jury is still out on this one.
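The arithmetic of the denominator choice can be made concrete with hypothetical counts (none of these numbers come from the trial itself):

```python
# Hypothetical counts for illustration; not the actual trial data.
excess_invasive_15yr = 100    # excess cases on the screened arm at 15 years
detected_first_5yr   = 450    # cases detected during the 5-year screening period
detected_15yr        = 3000   # all cases on the screened arm over 15 years

# Same excess, two candidate denominators:
pct_screen_period = 100 * excess_invasive_15yr / detected_first_5yr
pct_full_followup = 100 * excess_invasive_15yr / detected_15yr

print(f"screen-period denominator: {pct_screen_period:.1f}%")
print(f"full-follow-up denominator: {pct_full_followup:.1f}%")
```

The same excess of cases produces a strikingly different headline percentage depending on which denominator is chosen, which is why the choice matters so much for how the 22% figure is interpreted.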
Perhaps equally important, the screening behavior on the two arms was not monitored after the initial screening period. If, after this time, women who had been assigned to the screening arm sought mammography more frequently than women who had been assigned to the control arm, the estimated number of overdiagnosed cases would be too high.
Meta-analysis Cited
A recently reported study by Hersch et al,4 reviewed in this issue of The ASCO Post, found that use of a decision aid with information on overdetection was associated with a reduced rate of positive attitudes toward screening and a reduced intention to be screened compared with use of a decision aid not including such information. The investigators used neither of the above estimates of overdiagnosis in their decision aid, citing instead a number from a meta-analysis conducted by the Independent UK Panel on Breast Cancer Screening,5 to which they attribute this statement: “Although diverse estimates of overdetection are available, derived from various data and methods, randomised trials provide the best evidence for the extent of overdetection.”
This too bears closer scrutiny, because overdiagnosis estimates based on cumulative excess incidence in a trial are highly inflated under typical follow-up durations, which tend to be modest. Even with longer follow-up, conditions for validity are often not satisfied. Thus, relying on clinical trials to automatically provide unbiased estimates of overdiagnosis is problematic.
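Why short follow-up inflates these estimates can be seen in a toy simulation under a deliberately simplified, hypothetical assumption: every screen-detected cancer is a real cancer whose diagnosis is merely advanced by a fixed lead time, so the true overdiagnosis rate is zero. Cumulative excess incidence nonetheless suggests substantial overdiagnosis until follow-up extends beyond the screening period plus the lead time:

```python
import random

random.seed(0)
N = 100_000
# Hypothetical times (years) at which each cancer would surface clinically.
clinical_times = [random.uniform(0, 40) for _ in range(N)]

LEAD = 3.0         # assumed fixed lead time gained by screen detection
SCREEN_YEARS = 5   # screening offered only in the first 5 years (stop-screen design)

def cumulative_cases(followup: float, screened: bool) -> int:
    """Cases diagnosed by `followup` years on one arm of the toy trial."""
    count = 0
    for ct in clinical_times:
        if screened and 0 <= ct - LEAD <= SCREEN_YEARS:
            dx = ct - LEAD   # screen-detected during the screening period
        else:
            dx = ct          # otherwise surfaces clinically
        if dx <= followup:
            count += 1
    return count

for t in (5, 6, 8, 15):
    excess = cumulative_cases(t, True) - cumulative_cases(t, False)
    print(f"follow-up {t:>2} yr: excess cases on screened arm = {excess}")
```

In this model no case is overdiagnosed, yet the screened arm shows a large apparent excess at 5 years that shrinks steadily and reaches zero only once follow-up passes 8 years (screening period plus lead time). Real trials with variable lead times and imperfect compliance need even longer follow-up before cumulative excess incidence stops overstating overdiagnosis.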
Further Challenges to Validity
Unfortunately, most published estimates of overdiagnosis suffer from such limitations. Consequently, at this point, we still don’t really know how frequently breast cancer cases detected by mammography are overdiagnosed. Even the Independent UK Panel, in their report, noted that their estimates were “the best estimates from a paucity of reliable data.”
Many methodologic investigations have pointed out the challenges to validity of available estimates, and some have even provided more mathematically based alternatives, which tend to produce lower values. But, clearly, methods and media don’t mix, and simpler studies that use excess incidence as a proxy for overdiagnosis have dominated the popular press.
Hersch et al deserve to be commended for their trial, which not only provided information about the dismal state of public knowledge about overdiagnosis, but also indicated that improving the state of knowledge could influence women’s screening behavior. However, it is unclear whether the women who received the enhanced decision aid were influenced more by the information that overdiagnosis exists or by the little dots in the accompanying graphic showing how many screen-detected cases it affects. Hopefully, the former was the principal factor; at least the information that overdiagnosis does exist is watertight. ■
Disclosure: Dr. Etzioni reported no potential conflicts of interest.
References
1. Ong M-S, Mandl KD: National expenditure for false-positive mammograms and breast cancer overdiagnoses estimated at $4 billion a year. Health Aff 34:576-583, 2015.
2. Bleyer A, Welch HG: Effect of three decades of screening mammography on breast-cancer incidence. N Engl J Med 367:1998-2005, 2012.
3. Miller AB, Wall C, Baines CJ, et al: Twenty five year follow-up for breast cancer incidence and mortality of the Canadian National Breast Screening Study: Randomised screening trial. BMJ 348:g366, 2014.
4. Hersch J, Barratt A, Jansen J, et al: Use of a decision aid including information on overdetection to support informed choice about breast cancer screening: A randomised controlled trial. Lancet. February 18, 2015 (early release online).
5. Independent UK Panel on Breast Cancer Screening: The benefits and harms of breast cancer screening: An independent review. Lancet 380:1778-1786, 2012.
Dr. Etzioni is a biostatistician and member of the Public Health Sciences Division, Fred Hutchinson Cancer Research Center, Seattle.