OJS Vol. 7 No. 5, October 2017
The Scaling Constant D in Item Response Theory
Author(s) Gregory Camilli
In item response theory (IRT), the scaling constant D = 1.7 is used to rescale a discrimination coefficient a, estimated with the logistic model, to the normal metric. Empirical verification is provided that Savalei's [1] proposed scaling constant of D = 1.749, derived from Kullback-Leibler divergence, gives the best empirical approximation. However, framing this issue as one of approximation accuracy is misleading for two reasons. First, scaling does not affect the fit of the logistic model to the data. Second, the best scaling constant to the normal metric varies with item difficulty, so D = 1.749 is best thought of as an average of scaling transformations across items. The traditional scaling with D = 1.7 persists simply because it preserves the historical interpretation of the metric of item discrimination parameters.
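The contrast between the two constants can be illustrated numerically. The sketch below, a minimal illustration rather than the paper's own analysis, compares the standard normal CDF with the scaled logistic function for D = 1.702 (Haley's [4] minimax value, commonly rounded to 1.7) and D = 1.749 (Savalei's [1] KL-based value); the evaluation grid is an arbitrary choice.

```python
# Compare how closely the logistic function with scaling constant D
# approximates the standard normal CDF, using the maximum absolute
# difference over a grid as the (illustrative) criterion.
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logistic(x, D):
    """Logistic function with scaling constant D."""
    return 1.0 / (1.0 + math.exp(-D * x))

# Evaluation grid over a range where the two curves differ appreciably.
grid = [i / 100.0 for i in range(-400, 401)]

for D in (1.702, 1.749):
    max_abs = max(abs(normal_cdf(x) - logistic(x, D)) for x in grid)
    print(f"D = {D}: max |Phi(x) - logistic(Dx)| = {max_abs:.5f}")
```

Under this sup-norm criterion D = 1.702 is optimal (the difference stays below 0.01, as Haley showed), while D = 1.749 is optimal under the KL criterion instead; which constant looks "best" depends on the discrepancy measure, consistent with the paper's point that the choice is one of convention rather than fit.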
Cite this paper
Camilli, G. (2017) The Scaling Constant D in Item Response Theory. Open Journal of Statistics, 7, 780-785. doi: 10.4236/ojs.2017.75055.
[1]   Savalei, V. (2006) Logistic Approximation to the Normal: The KL Rationale. Psychometrika, 71, 763-767.

[2]   Cox, D.R. (1970) The Analysis of Binary Data. Methuen, London.

[3]   Johnson, N.L. and Kotz, S. (1970) Continuous Univariate Distributions-2. Houghton Mifflin, Boston.

[4]   Haley, D.C. (1952) Estimation of the Dosage Mortality Relationship When the Dose Is Subject to Error. Technical Report No. 15 (Office of Naval Research Contract No. 25140, NR-342-022). Applied Mathematics and Statistics Laboratory, Stanford University.

[5]   Camilli, G. (1994) Origin of the Scaling Constant in Item Response Theory. Journal of Educational and Behavioral Statistics, 19, 293-295.

[6]   Camilli, G. (1995) Correction. Journal of Educational and Behavioral Statistics, 20, np.

[7]   Pingel, R. (2014) Some Approximations of the Logistic Distribution with Application to the Covariance Matrix of Logistic Regression. Statistics and Probability Letters, 85, 63-68.

[8]   Houts, C.R. and Cai, L. (2013) flexMIRT Users Manual Version 2.0: Flexible Multilevel Item Factor Analysis and Test Scoring. Vector Psychometric Group, Seattle.

[9]   Muraki, E. (1992) A Generalized Partial Credit Model: Application of an EM Algorithm. Applied Psychological Measurement, 16, 159-176.

[10]   Samejima, F. (1969) Estimation of Latent Ability Using a Response Pattern of Graded Scores. (Psychometrika Monograph, No. 17). Psychometric Society, Richmond.

[11]   Birnbaum, A. (1968) Some Latent Trait Models and Their Use in Inferring an Examinee’s Ability. In: Lord, F.M. and Novick, M.R., Eds., Statistical Theories of Mental Test Scores, Addison-Wesley, Reading, 397-479.