ABSTRACT

Introduction: It is a common finding that, despite high levels of specificity and sensitivity, many medical tests are not highly effective at diagnosing diseases exhibiting a low prevalence within a clinical population. What is not widely known or appreciated is how the results of retesting a patient with the same or a different medical or psychological test impact the estimated probability that the patient has a particular disease. In the absence of a ‘gold standard’, special techniques are required to understand the error structure of a medical test. Generalizability theory can provide guidance as to whether a serial Bayes model accurately updates the positive predictive value (PPV) across multiple test results.

Methods: To understand how sources of error impact a test’s outcome, test results should be sampled across the testing conditions that may contribute to error. A generalizability analysis of appropriately sampled test results allows researchers to estimate the influence of each error source as a variance component. These results can then be used to determine whether, or under what conditions, the assumption of test independence is approximately satisfied, and whether Bayes’ theorem accurately updates probabilities upon retesting.

Results: Four hypothetical generalizability study outcomes are displayed as variance component patterns. Each pattern has a different practical implication for achieving independence between test results and deriving an enhanced PPV through retesting an individual patient.

Discussion: The techniques demonstrated in this article can play an important role in achieving an enhanced positive predictive value in medical and psychological diagnostic testing and can help ensure greater confidence in a wide range of testing contexts.
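The serial Bayes update discussed above can be sketched in a few lines: the PPV obtained after one positive result serves as the prior prevalence for the next test. This is a minimal illustration, assuming the successive test results are conditionally independent given disease status (the very assumption the generalizability analysis is meant to check); the function names and numerical values are illustrative, not taken from the article.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem:
    P(D | +) = Se * p / (Se * p + (1 - Sp) * (1 - p))."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

def serial_ppv(sensitivity: float, specificity: float,
               prevalence: float, n_positive_tests: int) -> float:
    """Probability of disease after n successive positive results,
    treating each posterior as the prior for the next test.
    Valid only if the tests are conditionally independent."""
    p = prevalence
    for _ in range(n_positive_tests):
        p = ppv(sensitivity, specificity, p)
    return p

# A 90%-sensitive, 90%-specific test for a disease of 1% prevalence:
p1 = serial_ppv(0.90, 0.90, 0.01, 1)   # about 0.083 after one positive
p2 = serial_ppv(0.90, 0.90, 0.01, 2)   # about 0.45 after two positives
```

The example makes the abstract's opening point concrete: a single positive result from an apparently accurate test leaves the patient with only an 8% chance of disease at 1% prevalence, while a second independent positive raises it to 45%. When the independence assumption fails, this serial update overstates the enhanced PPV.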
Cite this paper
Kreiter, C. (2010). Using Generalizability Theory to Evaluate the Applicability of a Serial Bayes Model in Estimating the Positive Predictive Value of Multiple Psychological or Medical Tests. Psychology, 1, 194-198. doi: 10.4236/psych.2010.13026.