Chaltiel and Hill show that studies based on individual data indicate a low percentage of overdiagnosis from mammography screening. How, then, can we explain the vast excess of cancers that is consistently associated with mammography screening in observations on aggregated data? We have shown that this excess occurs more than 5 years after mammography, in every country, and can only be interpreted as originating from radiation-induced cancers (1,2).
The absence of discussion of this issue is worrying, because Catherine Hill is perfectly aware that I was dismissed from my permanent position at INSERM after raising the alarm about these mammography-induced cancers.
1) Corcos D. Breast cancer incidence as a function of the number of previous mammograms: analysis of the NHS screening programme. bioRxiv 2017. doi: 10.1101/238527.
2) Corcos D, Bleyer A. Epidemiologic Signatures in Cancer. N Engl J Med. 2020;382(1):96. doi: 10.1056/NEJMc1914747.
When the Independent UK Panel, led by Sir Michael Marmot, estimated breast cancer overdiagnosis, they used the gold standard: randomised trials with long follow-up and no screening in the control group. The Panel chose not to use the observational studies because “this method could give no reliable estimate of the extent of overdiagnosis.”(1) Inherent biases in observational studies are also the reason that the UK National Screening Committee requires high-quality, randomised trial evidence on benefits and harms before screening is introduced.(2)
Conversely, Chaltiel and Hill (3) dismiss the gold standard estimates of overdiagnosis and focus on an observational study (4), while rejecting the estimates from another observational study (5) that agrees more closely with those of the randomised trials. Notably, the estimates from the trials were consistent with each other. Chaltiel and Hill’s choice conflicts with the fundamental principles of evidence-based medicine.
An impartial assessment of the risk of bias of all observational studies of breast cancer overdiagnosis found that many studies, including ones using individual data, were at high risk of bias because of selection bias, confounding, inadequate adjustment for lead time, and non-transparent methods.(6) It is misleadingly simplistic to assert that having individual-level data provides protection against significant bias, as we explain below.
Chaltiel and Hill’s study design of choice is an age-period-cohort (APC) analysis, a notably complex design that relies on a strong statistical assumption of distinct, independent, and separable effects of the three core variables in APC analyses, resulting in the well-recognised “identifiability problem” of the method.(7) They also superimpose the additional variable of geographic variation. As the authors state, “there was no obvious control population allowing direct estimation of overdiagnosis”, so they generate expected breast cancer incidence in a pseudo-control population from a model that attempts to simultaneously accommodate screening invitation, period, region, and generation, along with interactions between period and generation (footnote to their Table 1). The footnote only hints at the inherent complexities. The number of cases in the absence of screening is obviously critical to the resulting calculation of overdiagnosis, and there are many moving parts in the proposed model, each with its own unknown degree of uncertainty. The estimated number of expected cases in Table 1 is therefore a form of pseudo-precision, and it would be a mistake to equate this complexity with accuracy.

Further uncertainty comes from small numbers: the population of Funen women aged 50 to 69 years was about 50,000, and Njor et al. included fewer than half of these women (those aged 59 to 70 years), ending up with an ‘intervention group’ much smaller than the groups that informed the estimates by Marmot, and than that used by Jørgensen et al., who included all women aged 50 to 69 years in all Danish regions that offered screening (about 100,000 women). By excluding women in their 50s, Njor’s analysis also excluded many of the crucial first (prevalence) screening rounds, where overdiagnosis is likely most pronounced; as such, their results do not fully represent the target population of national breast cancer screening programmes.
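The identifiability problem referred to above has a simple algebraic root: period equals age plus birth cohort, so the three variables are exactly linearly dependent and their separate linear effects cannot be estimated without an extra assumption. A minimal sketch (not the authors’ model; the age and cohort values are arbitrary) shows the resulting rank deficiency of the design matrix:

```python
# Sketch of the APC identifiability problem: period = age + cohort,
# so the (age, period, cohort) design matrix is rank deficient and any
# linear trend can be re-attributed among the three effects.
import numpy as np

ages = [50, 55, 60, 65]        # arbitrary illustrative values
cohorts = [1940, 1945, 1950]   # arbitrary illustrative values

# Each row is (age, period, cohort) with period computed as age + cohort.
rows = [(a, a + c, c) for a in ages for c in cohorts]
X = np.array(rows, dtype=float)

# Three columns, but only rank 2: the linear effects are not separately
# identifiable without an additional constraint.
print(np.linalg.matrix_rank(X))  # 2
```

Any APC model must break this dependence with an external constraint, which is one reason such analyses rest on strong, untestable assumptions.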
The UK Panel presented four ways of calculating percentage estimates of overdiagnosis and preferred two of them, which they presented in a meta-analysis: 10.7% of all breast cancers diagnosed during active screening years plus the rest of the screened women’s lives, and 19.0% of all breast cancers diagnosed during the years of active participation in screening. There is no “right” percentage; each addresses a different question. Yet Chaltiel and Hill present all percentage estimates (e.g. in their Figure 1) as if they were directly comparable. They are not, and the estimates from Njor et al. and Jørgensen et al. cannot be directly compared because they used different denominators. Presenting percentages that are not comparable as if they were perpetuates the confusion and disagreement that exists over breast cancer overdiagnosis estimates, and should be avoided.
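The denominator point can be made concrete with invented numbers. In the sketch below, all counts are hypothetical, chosen only so that the same absolute excess reproduces the Panel’s two headline percentages; they are not the Panel’s actual case counts:

```python
# Hypothetical illustration: one absolute number of overdiagnosed cancers
# yields very different percentages depending on the denominator chosen.
overdiagnosed = 130  # hypothetical excess cancers in the screened group

# Denominator A: all cancers diagnosed during screening years plus the
# rest of the screened women's lives (hypothetical count).
cancers_lifetime = 1215

# Denominator B: cancers diagnosed during active participation in
# screening only (hypothetical count).
cancers_during_screening = 684

pct_a = 100 * overdiagnosed / cancers_lifetime
pct_b = 100 * overdiagnosed / cancers_during_screening

print(f"{pct_a:.1f}% of lifetime diagnoses")          # 10.7%
print(f"{pct_b:.1f}% of diagnoses during screening")  # 19.0%
```

Both percentages describe the same excess of cancers; comparing them as if they answered the same question is exactly the error described above.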
Lastly, Chaltiel and Hill cited the article by Spix et al. as an example of overestimation of overdiagnosis. However, the overdiagnosis estimate was only slightly reduced, from 71% to 62%, after correction for neuroblastomas found after the screening period.(8) More importantly, the data used by Spix et al. imply fewer neuroblastomas than expected (i.e., than if screening had not existed) once children are older and no longer screened. Danish data, by contrast, showed no compensatory drop in incidence among previously screened women as they aged past 70. Jørgensen et al. performed extensive analyses to look for such a deficit at the end of a long observation period, when all women over 70 years would previously have been offered screening; there was no such decline.(9) Similar observations have been made in Sweden and the Netherlands: after thirty years of high participation in breast screening, breast cancer incidence rates in these two countries have not declined among older women no longer invited to screening.(10,11)
Comparing incidence trends in breast screening with those of other cancer screening programmes is informative. Ovarian cancer screening likely causes little overdiagnosis, and its effect on incidence differs markedly from that of breast screening.(12) Likewise, cervical and colorectal cancer screening can cause declines in the incidence of invasive cancers through removal of precursors, albeit at the cost of very large increases in non-invasive intraepithelial lesions, many of which regress spontaneously. In contrast, although about 20-25% of cases detected with breast screening are DCIS, the incidence of invasive breast cancer continues to increase, even after decades of screening. Breast cancer screening affects incidence rates in ways that carry all the hallmarks of substantial overdiagnosis.(13)
1. Marmot MG, Altman DG, Cameron DA et al. The benefits and harms of breast cancer screening: an independent review. British Journal of Cancer. 2013;108(11):2205-40.
2. National Screening Committee. Criteria for appraising the viability, effectiveness and appropriateness of a screening programme. 2015.
https://www.gov.uk/government/publications/evidence-review-criteria-nati... Accessed 9 July 2021.
3. Chaltiel D, Hill C. Estimations of overdiagnosis in breast cancer screening vary between 0% and over 50%: why? BMJ Open. 2021;11(6):e046353.
4. Njor SH, Olsen AH, Blichert-Toft M, Schwartz W, Vejborg I, Lynge E. Overdiagnosis in screening mammography in Denmark: population based cohort study. BMJ : British Medical Journal. 2013;346:f1064.
5. Jørgensen KJ, Zahl P-H, Gøtzsche PC. Overdiagnosis in organised mammography screening in Denmark. A comparative study. BMC Women's Health. 2009;9(1):36.
6. Carter JL, Coletti RJ, Harris RP. Quantifying and monitoring overdiagnosis in cancer screening: a systematic review of methods. BMJ : British Medical Journal. 2015;350:g7773.
7. Browning M, Crawford I, Knoef M. The age-period cohort problem: set identification and point identification. 2012.
8. Spix C, Michaelis J, Berthold F, Erttmann R, Sander J, Schilling FH. Lead-time and overdiagnosis estimation in neuroblastoma screening. Stat Med. 2003;22(18):2877-92.
9. Jørgensen KJ, Gøtzsche PC, Kalager M, et al. Breast cancer screening in Denmark: a cohort study of tumor size and overdiagnosis. Annals of Internal Medicine. 2017;166:313-23.
10. The NORDCAN project https://www-dep.iarc.fr/NORDCAN/english/frame.asp. Accessed 9 July 2021.
11. Autier P, Boniol M, Koechlin A, Pizot C, Boniol M. Mammography screening effectiveness and overdiagnosis in the Netherlands: population based study. BMJ. 2017;359:j5224.
12. Jacobs IJ, Menon U, Ryan A, Gentry-Maharaj A, Burnell M, Kalsi JK, et al. Ovarian cancer screening and mortality in the UK Collaborative Trial of Ovarian Cancer Screening (UKCTOCS): a randomised controlled trial. Lancet. 2016;387(10022):945-56.
13. Welch HG, Kramer BS, Black WC. Epidemiologic Signatures in Cancer. New England Journal of Medicine. 2019;381(14):1378-86.