



INVITED REVIEW ARTICLE 

Year: 2022 | Volume: 22 | Issue: 4 | Page: 177-185

User's guide to sample size estimation in diagnostic accuracy studies
Haldun Akoglu
Department of Emergency Medicine, Marmara University School of Medicine, Istanbul, Turkey
Date of Submission: 25-Jul-2022
Date of Decision: 30-Jul-2022
Date of Acceptance: 05-Aug-2022
Date of Web Publication: 01-Oct-2022
Correspondence Address: Haldun Akoglu, Department of Emergency Medicine, Marmara University School of Medicine, Istanbul, Turkey
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/2452-2473.357348
Sample size estimation is an overlooked concept that is rarely reported in diagnostic accuracy studies, primarily because clinical researchers often do not know when and how to estimate sample size. In this review, readers will find sample size estimation procedures for diagnostic tests with dichotomized outcomes, explained in detail with clinically relevant examples. We hope that, with the help of practical tables and a free online calculator (https://turkjemergmed.com/calculator), researchers can obtain accurate sample size estimates without working through the equations themselves and can use this review as a practical guide to sample size estimation in diagnostic accuracy studies.
Keywords: Calculator, diagnostic accuracy, online, sample size, sensitivity, specificity
How to cite this article: Akoglu H. User's guide to sample size estimation in diagnostic accuracy studies. Turk J Emerg Med 2022;22:177-85.
Introduction   
Diagnostic accuracy studies are essential for better clinical decision-making. To estimate the diagnostic accuracy of a test with the desired statistical power, investigators need to know the minimal sample size required for their experiments. As in all research, studies with small sample sizes fail to produce accurate estimates and yield wide confidence intervals, whereas studies with unnecessarily large sample sizes waste resources.^{[1]} Indeed, sample size estimation is an overlooked concept and is rarely reported in diagnostic accuracy studies.^{[2],[3]} Bochmann et al. reported in 2005 that only 1 in 40 of the diagnostic accuracy studies published in the top 5 ophthalmology journals reported a sample size calculation.^{[3]} This is primarily because clinical researchers lack information on when and how they should estimate sample size.
Therefore, this review aims to help clinical researchers by defining practical sample size estimation techniques for different study designs. We will start with the description of the clinical diagnostic evaluation process. Then, we will define the characteristics and measures of diagnostic accuracy studies. After we summarize the design options, we will define how to estimate the sample size for each of those different designs.
Definitions   
In diagnostic accuracy studies, the test under investigation is called the index test. The comparator test, presumably the better one, is called the reference standard. The diagnostic evaluation process starts with a list of differential diagnoses, each with a different probability. Those probabilities are generated from local epidemiological data, the "gestalt" of the experienced physician, and the results of previous tests. The probability of disease before performing a test is called the prior probability. Physicians order consecutive tests to increase or decrease the probability of specific diagnoses and narrow down the list. Each diagnosis on this list has its own probability scale (from 0% to 100%) for that patient. There are two important thresholds on that scale: the test threshold marks the disease probability that is high enough to warrant further testing to rule the diagnosis in or out; the treatment threshold marks the disease probability that is high enough to accept that diagnosis and start treatment. The prior probability of each disease changes according to the result of each test; the updated value is called the posterior probability. The aim is to move the posterior probabilities above the treatment threshold or below the test threshold with the results of consecutive tests, ruling each diagnosis in or out. In the clinical setting, every procedure performed to gather information about the disease probability is a test, such as history taking (age, sex, and presence of comorbidities), measurements (RR, HR, or SpO2), or physical examination (rales, rhonchi, Romberg, etc.). We combine the results of those tests, increase or decrease the probabilities of the diagnoses we have in mind, and decide to test further or to treat.
For better comprehension, let us assume that a 75-year-old bedridden female patient with Alzheimer's disease presents to an emergency department with tachypnea of 30/min, peripheral oxygen saturation of 90%, and tachycardia of 110 bpm. As soon as those data are gathered, a few diagnoses can be listed, with pulmonary embolism at the top. In this patient, the probability of pulmonary embolism is above the treatment threshold, and ordering treatment with LMWH (low molecular weight heparin) is warranted. One may still order tests to rule in or rule out pneumonia, pneumothorax, or other diagnoses, or may order antibiotics if pneumonia also rises above the treatment threshold. In contrast, an X-ray may lower the probability of pneumothorax below the test threshold, so pneumothorax can be ruled out. A clinical diagnostician is a detective investigating multiple diagnoses simultaneously, using a battery of tests to move the probabilities of several diagnoses below or above the test and treatment thresholds.
In classical diagnostic accuracy studies, a categorical or continuous index test variable is compared against a categorical, dichotomized reference standard variable. In this review, we will focus on index tests with a dichotomized outcome (positive or negative). We evaluate the accuracy of the index test by its sensitivity and specificity, which are calculated from the values in the cells of the contingency table comparing those two tests. The sensitivity indicates the proportion of true positives in diseased subjects, and specificity determines the proportion of true negatives in nondiseased subjects. Positive predictive value (PPV) determines the proportion of diseased subjects out of all the positives, and negative predictive value (NPV) determines the proportion of nondiseased subjects out of all negatives.
PPV and NPV are affected by the prior probability (prevalence) of disease in the target population and are rarely used. On the other hand, sensitivity and specificity are not influenced by the prevalence of disease, which is why they are so popular.^{[1]} Their sum is a more important metric than the individual values, and they should always be considered together. Tests whose sensitivity and specificity sum to nearly 200% are almost perfect. A test is no better than tossing a coin if that sum is close to 100%, even when one of the two values is itself close to 100%. For example, a test with a sensitivity of 90% and a specificity of 10% offers no clinical diagnostic benefit. Therefore, both metrics are combined in a one-dimensional index called the likelihood ratio (LR). The positive LR is the ratio of the probability of a positive test in the diseased to that in the nondiseased, and the negative LR is the ratio of the probability of a negative test in the diseased to that in the nondiseased [Table 1]. Any test with a positive LR above 10 is considered good for ruling in a diagnosis, and tests with a negative LR below 0.1 are considered good for ruling one out. LRs are not affected by the prevalence of the disease and are useful for comparing two separate tests. Furthermore, the posterior probability of a diagnosis can be calculated with the help of the positive and negative LRs (see the online calculator at https://turkjemergmed.com/calculator).
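The prior-to-posterior update described above can be sketched in a few lines of Python using the odds form of Bayes' theorem (the function name is ours; the arithmetic is the standard pretest odds × LR = posttest odds relationship):

```python
def posterior_probability(pretest_prob: float, lr: float) -> float:
    """Update a pre-test probability with a likelihood ratio
    via the odds form of Bayes' theorem."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# A pre-test probability of 30% with a positive LR of 10
# rises above 80%; a negative LR of 0.1 pulls it down to ~4%.
print(round(posterior_probability(0.30, 10), 3))   # 0.811
print(round(posterior_probability(0.30, 0.1), 3))  # 0.041
```

This illustrates why LR+ > 10 and LR− < 0.1 are regarded as the thresholds for clinically useful rule-in and rule-out tests.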
In a comparative analysis, a Type I error occurs if we incorrectly reject the null hypothesis (no difference) and report a difference, whereas a Type II error occurs if we incorrectly accept the null hypothesis and report that there is no difference [Table 1]. Sample size estimation is performed to calculate how many patients are required to avoid Type I and Type II errors.^{[4]}
Design Options of the Diagnostic Accuracy Studies   
The classical design is a cross-sectional cohort study, or single-test design, in which all consecutive patients suspected of the target disease or condition are tested with both the index test and the reference standard [Figure 1].^{[6]} This approach may be modified to delayed-type cross-sectional, case-referent, or test result-based sampling designs, or cohort and case-control designs may be used instead.^{[5]} In a comparative design, the index test is compared to a previously evaluated comparator test in a paired or unpaired fashion [Figure 1]. In the comparative unpaired design (between-subjects), study participants are randomly assigned to either the index or the comparator test; each participant receives one of the two tests, not both. Then, the disease status of every participant is confirmed with the reference standard. This design is preferred when researchers aim to evaluate the impact of diagnostic testing on clinical decision-making, patient prognosis, and the real-life utility of the index test; examples are the "diagnostic randomized controlled trial" and before-after studies.^{[5]} In the comparative paired design (within-subjects), the index, comparator, and reference standard tests are all performed on every subject. Since this decreases the variability of the study results, the paired design is preferred whenever it is feasible and justifiable.^{[7],[8]}  Figure 1: Major study designs that are used to compare the diagnostic accuracy of tests
Sample Size Estimation in Diagnostic Accuracy Studies   
There are four major designs for comparing a dichotomized index test with a dichotomized reference standard. The equations appropriate for estimating sample size in each of those situations were summarized previously by Obuchowski [Table 2].^{[9]} We prepared offline tables [Table 2], [Table 3], [Table 4], [Table 5], [Table 6] and an online calculator (https://turkjemergmed.com/calculator) that researchers can use to estimate the sample size for their diagnostic accuracy studies.  Table 3: Sample size estimates at predetermined sensitivity and specificity values for various disease prevalence states. Values are calculated at marginal errors of (A) 3%, (B) 5%, and (C) 7%
 Table 4: Sample size estimates for a difference of at least 5% in co-primary endpoints with a Type I error of 5% and (A) power of 90% and (B) 80%
 Table 5: Sample size estimation for comparing two independent proportions (unpaired groups) for (A) power of 90% and (B) 80%
Single-test design (new diagnostic tests)
This approach is preferred when a new diagnostic test (either newly developed or new to the study population) is investigated in a prospective cohort in which the disease status and prevalence are known [[Table 2], Equation 1].^{[1]} Researchers seek to be sure, with a confidence level of 95%, that their predetermined sensitivity or specificity lies within a marginal error of d (the desired width of one-half of the confidence interval [CI]). The sensitivity and specificity values are taken from previously published data or from clinician experience and judgment.
For example, let us assume that we are investigating the value of a new test for diagnostic screening. We aim for a sensitivity of 90% in a cohort with a known disease prevalence of 10%, and we want the maximum marginal error of the estimate not to exceed 5% with a 95% CI. So, we select [Table 3]B, find the row for a disease prevalence of 10%, and read the cell in the column for 90% sensitivity, which is 1383. We estimate that 10% of the 1383 subjects will be diseased (n = 138) and 90% will be nondiseased.
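The worked example above can be reproduced with a short Python sketch of the standard normal-approximation precision formula (function name is ours; we assume the table rounds the final estimate to the nearest integer):

```python
import math
from statistics import NormalDist

def n_for_sensitivity(sens: float, prevalence: float,
                      margin: float, conf: float = 0.95) -> int:
    """Total subjects needed so that the sensitivity estimate lies within
    +/- `margin` of the true value at the given confidence level.
    The diseased-only requirement is inflated by dividing by prevalence."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)          # ~1.96 for 95% CI
    n_diseased = z ** 2 * sens * (1 - sens) / margin ** 2  # diseased subjects
    return round(n_diseased / prevalence)                  # total cohort size

# Text example: sensitivity 90%, prevalence 10%, marginal error 5%
print(n_for_sensitivity(0.90, 0.10, 0.05))  # 1383
```

For specificity, the same formula is used with the specificity value, dividing by (1 − prevalence) instead.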
Single-test design, comparing the accuracy of a single test to a null value
If the true disease status of the patients is unknown at the time of enrollment, such studies are called confirmatory diagnostic accuracy studies.^{[7]} Obuchowski defined this approach as "comparing the sensitivity of a test to a prespecified value" [[Table 2], Equation 2].^{[9]} For example, surgery is the reference standard for the diagnosis of acute appendicitis, but it is invasive. The prevalence of acute appendicitis confirmed by surgery is around 40%, which means that 60% of the patients suspected of acute appendicitis underwent unnecessary surgery. Therefore, noninvasive alternatives such as non-contrast-enhanced computed tomography (CT) have emerged, and non-contrast-enhanced CT has been shown to have a sensitivity of 90%.^{[10]} We hypothesize that contrast-enhanced CT is better, with a sensitivity of around 95%. How many patients do we need to recruit to be sure that a sensitivity of 95% is significantly different from 90%, with a power of 90% and a Type I error of 5%?
[Table 4] presents precalculated sample size estimates for studies comparing the accuracy of a single index test to a null value, at a Type I error of 5% and a power of 90%. The cell intersecting the expected proportion of 95% (P1, contrast-enhanced CT) and the null value of 90% (P0, non-contrast-enhanced CT) shows that at least 340 diseased subjects are needed (patients with acute appendicitis confirmed at surgery). We then use Equations 4a and 4b in [Table 2] to adjust for prevalence: since the prevalence of acute appendicitis is 40%, we divide the estimate by 0.4, which yields 849. For this study, at least 849 subjects with suspected acute appendicitis are needed. Please note that these calculations incorporate Yates' continuity correction.
Sometimes researchers target sensitivity and specificity simultaneously and want a sample size that is sufficient for both. Since sensitivity and specificity are calculated in different groups (diseased vs. nondiseased), two separate sample sizes are calculated, each at a power of 90%; because both hypotheses must succeed, the final power of the study is approximately 0.9 × 0.9 ≈ 80%. Let us extend the example above and assume that we also want an adequate sample size for a specificity hypothesis. We expect the specificity of contrast-enhanced CT to be 85%, and we want to be sure that it is significantly higher than the specificity of non-contrast-enhanced CT (80%). To calculate the sample size estimate for specificity at a power of 90%, we again use [Table 4]. The cell intersecting P1 (contrast-enhanced CT) of 85% and P0 (null, non-contrast-enhanced CT) of 80% shows that we need at least 656 nondiseased subjects (patients without acute appendicitis confirmed at surgery). We use Equations 4a and 4b in [Table 2] to adjust the specificity estimate for disease prevalence (n/(1 − prevalence) = 656/(1 − 0.4)) and find that we need to recruit 1093 subjects. Since the higher of the two estimates (849 for sensitivity and 1093 for specificity) is 1093, we select this estimate, which gives a power of 80% at a Type I error of 5% for both outcomes.
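The estimates quoted above can be checked with a minimal Python sketch of the one-sample proportion-versus-null formula with a Yates-style continuity correction (function name is ours; we assume a two-sided Type I error and rounding of the corrected estimate to the nearest integer, which reproduces the quoted table values):

```python
import math
from statistics import NormalDist

def n_vs_null(p1: float, p0: float, alpha: float = 0.05,
              power: float = 0.90) -> int:
    """Diseased (or nondiseased) subjects needed to show that a proportion
    p1 differs from a null value p0, normal approximation with a
    Yates-style continuity correction applied to the crude estimate."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided Type I error
    z_b = NormalDist().inv_cdf(power)
    d = abs(p1 - p0)
    n0 = (z_a * math.sqrt(p0 * (1 - p0)) +
          z_b * math.sqrt(p1 * (1 - p1))) ** 2 / d ** 2
    # continuity correction
    return round(n0 / 4 * (1 + math.sqrt(1 + 4 / (n0 * d))) ** 2)

print(n_vs_null(0.95, 0.90))   # 340 diseased (sensitivity arm)
print(n_vs_null(0.85, 0.80))   # 656 nondiseased (specificity arm)
print(round(656 / (1 - 0.40))) # 1093 suspected patients after prevalence adjustment
```

The larger of the two prevalence-adjusted totals is then recruited, as in the worked example.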
According to Beam, Yates' continuity correction should be used when comparing proportions; therefore, we present corrected values in [Table 4], [Table 5], and [Table 6], and both corrected and uncorrected values in the online calculator.^{[11]} Some authors reported calculations that did not incorporate disease prevalence, whereas others did; we preferred the latter approach in this review.^{[12],[13]}
Studies comparing two diagnostic tests
As mentioned above, a comparative design can be unpaired or paired [Figure 1]. Beam described the formulas for estimating sample sizes for both designs [[Table 2], Equations 3a and 3b].^{[11]} Since we want to know whether one of the tests is significantly different from the other, calculations at one-sided significance levels are sufficient.
Unpaired design (between-subjects)
In this design, proportions are compared between different groups (unpaired) with a Chi-squared test. Therefore, the sample size for each group is estimated for the Chi-squared test with Yates' continuity correction, using the method given by Casagrande and Pike [[Table 2], Equation 5].^{[14]}
Let us assume we want to compare the sensitivities of two alternative diagnostic pathways, where the comparator has a sensitivity of 70%. We want to design our study so that there is an 80% chance of detecting a difference when our index test has a sensitivity of at least 80% (i.e., a difference of 10%). We accept a significance level of 5% with a one-sided hypothesis. In [Table 5] (for a power of 80%), we check the cell intersecting 70% and 80% and find that at least 250 subjects are needed for each pathway, making the total estimate 500 subjects.
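This unpaired example can be sketched as follows (function name is ours; we assume the pooled-variance two-proportion formula with a Yates-style continuity correction, in the spirit of Casagrande and Pike, with a one-sided Type I error):

```python
import math
from statistics import NormalDist

def n_per_group_unpaired(p1: float, p2: float, alpha: float = 0.05,
                         power: float = 0.80) -> int:
    """Per-group sample size for comparing two independent proportions
    with a one-sided test, continuity-corrected."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided significance level
    z_b = NormalDist().inv_cdf(power)
    d = abs(p2 - p1)
    p_bar = (p1 + p2) / 2                  # pooled proportion under H0
    n0 = (z_a * math.sqrt(2 * p_bar * (1 - p_bar)) +
          z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / d ** 2
    # continuity correction
    return round(n0 / 4 * (1 + math.sqrt(1 + 4 / (n0 * d))) ** 2)

n = n_per_group_unpaired(0.70, 0.80)
print(n, 2 * n)   # 250 per pathway, 500 in total
```

The result matches the tabulated value of 250 subjects per pathway in the worked example.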
Paired design (within-subjects)
In this design, proportions are compared between paired samples. Therefore, the sample size for the entire study is estimated for McNemar's test, using the method defined by Connor.^{[15]} The two diagnostic tests agree with each other to a variable degree, and this probability of disagreement (Ψ) affects the estimated sample size. At one extreme, the tests disagree only to the extent that their proportions (sensitivity or specificity) differ (Ψmin = |P2 − P1|). At the other extreme, they agree with each other only by chance, and the probability of disagreement is maximal (Ψmax = P1 × (1 − P2) + P2 × (1 − P1)). These are the two boundaries of the estimated sample size range for the paired design, and the mean of the two may be sufficient in most situations.
Let us work through the same example for a paired design. First, we check [Table 6] (lower boundary) for a 10% difference in proportions and 80% power: if the disagreement probability of the tests is at its minimum, a sample size of 78 subjects would be enough. Second, we check [Table 6] (upper boundary) for a power of 80% and read the cell intersecting 70% and 80%: if the two tests agree with each other only by chance (maximum disagreement), we need at least 252 subjects. The mean of this range (78 to 252, n = 165) or the upper boundary (n = 252) can be selected as the sample size. Note that, even at the highest probability of disagreement, the paired design requires only about half the sample size of the unpaired design.
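The paired boundaries above can be sketched with Connor's formula (function name is ours; we assume a one-sided Type I error and the same continuity-correction factor as in the unpaired case, which reproduces the quoted boundary values):

```python
import math
from statistics import NormalDist

def n_paired(psi: float, d: float, alpha: float = 0.05,
             power: float = 0.80) -> int:
    """Total paired subjects needed to detect a difference `d` between two
    proportions when the probability of disagreement between the tests
    is `psi` (Connor-style formula, continuity-corrected)."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided significance level
    z_b = NormalDist().inv_cdf(power)
    n0 = (z_a * math.sqrt(psi) +
          z_b * math.sqrt(psi - d ** 2)) ** 2 / d ** 2
    # continuity correction
    return round(n0 / 4 * (1 + math.sqrt(1 + 4 / (n0 * d))) ** 2)

p1, p2 = 0.70, 0.80
psi_min = abs(p2 - p1)                    # 0.10: minimum disagreement
psi_max = p1 * (1 - p2) + p2 * (1 - p1)   # 0.38: agreement by chance only
print(n_paired(psi_min, 0.10))   # 78  (lower boundary)
print(n_paired(psi_max, 0.10))   # 252 (upper boundary)
```

The two printed values bracket the sample size range, and their mean (165) or the upper boundary can be chosen, as in the worked example.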
Discussion   
We reviewed methods for estimating the minimum required sample size for different study designs in diagnostic accuracy research. This review was written by a clinical researcher with ease of use for clinical researchers in mind. There are alternative and more refined methods for estimating sample size for the procedures described above, and researchers should consult a statistician whenever they need a more accurate or sophisticated approach.
The accuracy of sample size estimates depends heavily on how closely the required assumptions are met.^{[11]} Study results may fall far from the researchers' assumptions, and post hoc (or interim) power and sample size analyses may be needed in such extreme conditions.
Debates are ongoing about whether Yates' continuity correction should be used, whether correcting for disease prevalence is needed when the prevalence is unknown before enrollment, and whether Connor's formula (Equation 3b) is too optimistic and underestimates the sample size.^{[11],[15]} Researchers should add a safety margin to account for these debatable points and aim for an optimal sample size.
Conclusion   
Sample size estimation is an overlooked concept that is rarely reported in diagnostic accuracy studies, primarily because clinical researchers often do not know when and how to estimate sample size. We hope that the tables and the online calculator supplementing this review can be used as a guide to estimating sample size in diagnostic accuracy studies.
Supplement
Online Calculator: https://turkjemergmed.com/calculator
Author contributions (CRediT)
HA completed this review on his own.
Acknowledgments
I thank Ozan Konrot for being one of the best in problem-solving with his charming smile. The online calculator could not have been prepared without his devoted input and relentless efforts. I would also like to thank Gokhan Aksel and Seref Kerem Corbacıoglu. They have been, and hopefully will remain, my closest peers during my journey through being an enthusiast, a student, a mentor, and a teacher of biostatistics, journalology, and clinical research methodology.
Conflicts of interest
None declared.
Funding
None.
References   
1.  Hajian-Tilaki K. Sample size estimation in diagnostic test studies of biomedical informatics. J Biomed Inform 2014;48:193-204.
2.  Bachmann LM, Puhan MA, ter Riet G, Bossuyt PM. Sample sizes of studies on diagnostic accuracy: Literature survey. BMJ 2006;332:1127-9.
3.  Bochmann F, Johnson Z, Azuara-Blanco A. Sample size in studies on diagnostic accuracy in ophthalmology: A literature survey. Br J Ophthalmol 2007;91:898-900.
4.  Jones SR, Carley S, Harrison M. An introduction to power and sample size estimation. Emerg Med J 2003;20:453-8.
5.  Holtman GA, Berger MY, Burger H, Deeks JJ, Donner-Banzhoff N, Fanshawe TR, et al. Development of practical recommendations for diagnostic accuracy studies in low-prevalence situations. J Clin Epidemiol 2019;114:38-48.
6.  Knottnerus JA, Buntinx F, editors. The Evidence Base of Clinical Diagnosis: Theory and Methods of Diagnostic Research. 2nd ed. Blackwell Publishing Ltd; 2011.
7.  Stark M, Hesse M, Brannath W, Zapf A. Blinded sample size re-estimation in a comparative diagnostic accuracy study. BMC Med Res Methodol 2022;22:115.
8.  Sitch AJ, Dekkers OM, Scholefield BR, Takwoingi Y. Introduction to diagnostic test accuracy studies. Eur J Endocrinol 2021;184:E5-9.
9.  Obuchowski NA. Sample size calculations in studies of test accuracy. Stat Methods Med Res 1998;7:371-92.
10.  Rud B, Vejborg TS, Rappeport ED, Reitsma JB, Wille-Jørgensen P. Computed tomography for diagnosis of acute appendicitis in adults. Cochrane Database Syst Rev 2019;2019:CD009977.
11.  Beam CA. Strategies for improving power in diagnostic radiology research. AJR Am J Roentgenol 1992;159:631-7.
12.  Buderer NM. Statistical methodology: I. Incorporating the prevalence of disease into the sample size calculation for sensitivity and specificity. Acad Emerg Med 1996;3:895-900.
13.  
14.  Casagrande JT, Pike MC. An improved approximate formula for calculating sample sizes for comparing two binomial distributions. Biometrics 1978;34:483-6.
15.  Connor RJ. Sample size for testing differences in proportions for the paired-sample design. Biometrics 1987;43:207-11.
