
Diagnostic Effectiveness.

(With a note on coronavirus testing)

Input.

Fill the two by two table with integer values. A proportional prevalence value can be given optionally. Or:

Give a proportional sensitivity value in the ++ box and a proportional specificity value in the -- box. A proportional prevalence value can be given optionally, and, if you want, a population or sample size as well.

Explanation.

If you think there isn't a lot you can do with a two by two table, think again. The analysis of the diagnostic effectiveness of a test is quite complicated, and in a two by two table you can study all the intricacies which play a role in the development and application of diagnostic instruments. The analysis concerns a test with two possible results (negative or positive, diseased or healthy, fail or pass) set against an objective measurement of the outcome, also measured dichotomously. The outcome might for example be established by waiting a while for the disease to develop or not to develop, by a highly valid laboratory procedure, by confirmative surgery, or, to take an example outside medicine, by whether the pupil indeed follows the predicted career. The input for the procedure is simple: in a two by two table you classify the number of times the test made a correct positive prediction that the individual is affected by the problem, a correct negative prediction, an incorrect positive prediction, and an incorrect negative prediction.
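The four cells described above are all that is needed to derive the main indicators. A minimal sketch in Python (the function name and the example counts are illustrative, not taken from any real study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Basic effectiveness indicators of a dichotomous test from 2x2 counts.

    tp: correct positive predictions, fp: incorrect positive predictions,
    fn: incorrect negative predictions, tn: correct negative predictions.
    """
    sensitivity = tp / (tp + fn)   # correct positives among the diseased
    specificity = tn / (tn + fp)   # correct negatives among the healthy
    ppv = tp / (tp + fp)           # positive predictive accuracy
    npv = tn / (tn + fn)           # negative predictive accuracy
    prevalence = (tp + fn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "prevalence": prevalence}

# Hypothetical example: 200 individuals, 50 of whom are diseased.
print(diagnostic_metrics(tp=45, fp=15, fn=5, tn=135))
```

Note that sensitivity and specificity condition on the true disease status, while the two predictive accuracies condition on the test result; the latter therefore depend on the prevalence in the tested group.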

Below, the various indicators presented in the output of this SISA procedure are discussed, with attention to various considerations in diagnostic test development theory. Most output is presented with estimates of the variance and standard errors. Often the continuity corrected Wilson confidence interval (which is equivalent to Fleiss's quadratic confidence interval) and sometimes Wald's confidence interval are also presented; for the kind of data used in test development these are to be preferred above intervals based directly on the standard errors. If you want to use another confidence interval, note the percentage and the number of cases on which this percentage is based and use the one mean procedure.
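The continuity corrected Wilson interval mentioned above can be computed directly from a count and a sample size. A minimal sketch (the function name is illustrative; z = 1.96 gives the usual 95% interval):

```python
import math

def wilson_cc(k, n, z=1.959964):
    """Continuity-corrected Wilson interval for a proportion k/n
    (equivalent to Fleiss's quadratic confidence interval)."""
    p = k / n
    q = 1.0 - p
    denom = 2.0 * (n + z * z)
    lower = (2*n*p + z*z - 1 - z*math.sqrt(z*z - 2 - 1/n + 4*p*(n*q + 1))) / denom
    upper = (2*n*p + z*z + 1 + z*math.sqrt(z*z + 2 - 1/n + 4*p*(n*q - 1))) / denom
    # At the boundaries the interval is anchored at 0 or 1.
    return (max(0.0, lower) if k > 0 else 0.0,
            min(1.0, upper) if k < n else 1.0)

# e.g. an observed sensitivity of 8 correct out of 20 diseased:
print(wilson_cc(8, 20))
```

Unlike the Wald interval, this one never extends below 0 or above 1, which matters for the very high or very low proportions typical of sensitivity and specificity estimates.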

Summarizing all of the above, sensitivity and specificity and the two predictive accuracies are probably the most valuable of the indicators. Sensitivity and specificity give a good view of the quality of the test, relatively independent of circumstances. The predictive accuracies show what happens in different practical situations, in terms of the numbers and proportions tested with correct and incorrect results. The predictive accuracies also give the post-test probability of having the disease, an essential piece of information to communicate to the patient together with his or her test result.

For a sample size calculation give either a proportional sensitivity value in the ++ box, or a proportional specificity value in the -- box. You must give a proportional prevalence value in the prevalence box. Give the maximum width of the confidence interval around the sensitivity or the specificity as a proportion in the bottom Number/Size box. Sample sizes are calculated according to Buderer's formula.
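Buderer's formula first determines how many diseased (or healthy) individuals are needed to estimate the sensitivity (or specificity) with the desired precision, and then inflates that number by the prevalence to get a total sample size. A sketch, assuming the precision d is the half-width of the 95% confidence interval (check how the procedure you use defines "width"):

```python
import math

def buderer_n(sens=None, spec=None, prev=0.5, d=0.05, z=1.959964):
    """Total sample size by Buderer's formula.

    sens/spec: the anticipated sensitivity or specificity (give one),
    prev: anticipated prevalence, d: desired CI half-width (assumption).
    """
    if sens is not None:
        n_diseased = z * z * sens * (1 - sens) / (d * d)
        return math.ceil(n_diseased / prev)          # scale up by prevalence
    if spec is not None:
        n_healthy = z * z * spec * (1 - spec) / (d * d)
        return math.ceil(n_healthy / (1 - prev))     # scale up by 1 - prevalence
    raise ValueError("give either sens or spec")

# e.g. anticipated sensitivity 0.60, prevalence 0.05, half-width 0.05:
print(buderer_n(sens=0.60, prev=0.05, d=0.05))
```

Note how a low prevalence drives the total sample size up sharply for sensitivity: only a small fraction of those sampled will be diseased, yet enough diseased individuals must be found.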

Note on coronavirus testing.

For the general population, 7.8% would test positive and would get their immunity passport; in 3% of the cases that would be justified, in 4.8% of the cases it would not. The positive predictive accuracy is 38.7%, which is the percentage of passport holders who indeed have immunity. The rest would both run and pose a considerable risk.

This is why practitioners prefer to test only symptomatic individuals, because the baseline prevalence will be higher in these groups and the outcome more acceptable; or to test groups who have been particularly exposed to the virus and might therefore have a higher baseline prevalence, such as healthcare workers. If healthcare workers test positive to an antibody test, they may be immune, run fewer risks in their work, and pose less of a risk to their patients; they might then no longer need as much protection. Say 25% of healthcare workers have previously been ill with the coronavirus. Then we fill in 0.25 in the prevalence box:

https://www.quantitativeskills.com/sisa/statistics/diagnos.php?n11=00.60&n12=00&n21=00&n22=00.9500&CI=95&CItype=CCwilson&prev=0.250&NN=00&Submit1=Calculate

The positive predictive accuracy is now 80%. These are the healthcare workers who may be able to do with less protection after a positive antibody test, being no longer a risk to others or themselves. The point is, of the healthcare workers who tested positive we do not know who are the 80% who were correctly classified and who are the 20% who were not.