Investigating Bias in the Evaluation Model Used to Evaluate the Effect of Breast Cancer Screening: A Simulation Study
Eeva-Liisa Røssell,
Jakob Hansen Viuff,
Mette Lise Lousdal and
Henrik Støvring
Additional contact information
Eeva-Liisa Røssell: Department of Public Health, Aarhus University, Aarhus C, Denmark
Jakob Hansen Viuff: Department of Clinical Epidemiology, Aarhus University, Aarhus N, Denmark
Mette Lise Lousdal: Department of Clinical Epidemiology, Aarhus University, Aarhus N, Denmark
Henrik Støvring: Department of Public Health, Aarhus University, Aarhus C, Denmark
Medical Decision Making, 2025, vol. 45, issue 8, 1025-1033
Abstract:
Background. Observational studies are used to evaluate the effect of breast cancer screening programs, but their validity depends on the study design used. One such design is the evaluation model, which extends follow-up beyond the screening program only for women who were diagnosed with breast cancer during the program. However, to avoid lead-time bias, the inclusion of risk time should be based on screening invitation, not on breast cancer diagnosis. The aim of this study is to investigate potential bias induced by the evaluation model. Methods. We used large-scale simulated datasets to investigate the evaluation model. Simulation model parameters for age-dependent breast cancer incidence, survival, breast cancer mortality, and all-cause mortality were obtained from Norwegian registries. Data were restricted to women aged 48 to 90 y and a period before screening implementation, 1986 to 1995. Simulation parameters were estimated for each of 2 periods (1986–1990 and 1991–1995). In the simulated datasets, 50% of women were randomly assigned to screening and 50% were not. Simulation scenarios varied the magnitude of the screening effect and the level of overdiagnosis. For each scenario, we applied 2 study designs, the evaluation model and ordinary incidence-based mortality, to estimate breast cancer mortality rates for the screening and nonscreening groups. For each design, these rates were compared to assess potential bias. Results. In scenarios with no screening effect and no overdiagnosis, the evaluation model estimated 6% to 8% reductions in breast cancer mortality due to lead-time bias. Bias increased with overdiagnosis. Conclusions. The evaluation model was biased by lead time, especially in scenarios with overdiagnosis. Thus, the attempt to capture more of the screening effect using the evaluation model comes at the risk of introducing bias.
Highlights:
The validity of observational studies of breast cancer screening programs depends on their study design being able to eliminate lead-time bias.
The evaluation model has been used to evaluate breast cancer screening in recent studies but introduces a study design based on breast cancer diagnosis that may introduce lead-time bias.
We used large-scale simulated datasets to compare study designs used to evaluate screening.
We found that the evaluation model was biased by lead time and estimated reductions in breast cancer mortality in scenarios with no screening effect.
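The lead-time mechanism behind the spurious reduction can be illustrated with a deliberately stylized sketch. This is not the authors' registry-calibrated simulation: the cohort size, mean survival, lead time, and follow-up length below are all hypothetical, and the scenario assumes no screening effect and no overdiagnosis.

```python
import random

random.seed(1)

N = 100_000          # hypothetical cohort size per arm
MEAN_SURVIVAL = 5.0  # hypothetical mean years from (unscreened) diagnosis to BC death
LEAD_TIME = 0.5      # screening advances diagnosis by 0.5 y; death time is unchanged
FOLLOWUP = 10.0      # hypothetical follow-up from invitation, identical in both arms

# Years from unscreened diagnosis to breast cancer death for each woman.
survival = [random.expovariate(1 / MEAN_SURVIVAL) for _ in range(N)]

# Diagnosis-based accounting (the flaw the evaluation model risks): person-time
# runs from diagnosis, so earlier diagnosis adds lead time to every screened
# woman's person-time without changing her death time.
rate_screen = N / sum(s + LEAD_TIME for s in survival)
rate_control = N / sum(survival)

# Invitation-based accounting (incidence-based mortality): person-time runs
# from invitation and is identical in both arms, so the rates coincide.
rate_ibm = N / (N * FOLLOWUP)

reduction = 1 - rate_screen / rate_control
print(f"diagnosis-based rate, screened: {rate_screen:.4f} per person-year")
print(f"diagnosis-based rate, control:  {rate_control:.4f} per person-year")
print(f"spurious mortality 'reduction': {reduction:.1%}")
```

Even though death times are identical in both arms, counting person-time from diagnosis makes the screened arm's mortality rate look lower, purely because diagnosis was moved earlier.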
Keywords: mammography; screening; bias; cohort studies; observational studies; mortality
Date: 2025
Downloads: https://journals.sagepub.com/doi/10.1177/0272989X251352570 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:sae:medema:v:45:y:2025:i:8:p:1025-1033
DOI: 10.1177/0272989X251352570
Bibliographic data for series maintained by SAGE Publications.