ITEM ANALYSIS OF MULTIPLE-CHOICE QUESTIONS UNDER CLASSICAL TEST THEORY

Authors

Khaninphasut, P.

Keywords:

Item Analysis, Item Difficulty, Item Discrimination, Multiple-Choice Questions, Classical Test Theory

Abstract

This academic article presents principles and practical guidelines for analyzing the quality of multiple-choice questions based on Classical Test Theory. Item quality is an important factor in the validity of assessing students' true abilities. Item analysis is a statistical process that systematically examines the effectiveness of items through three essential indices. First, the item difficulty index reflects the proportion of examinees who answer the item correctly. Second, the item discrimination index indicates the ability of an item to differentiate between high-ability and low-ability examinees. Third, distractor efficiency analyzes the capability of distractors to attract low-ability examinees. The article systematically presents the formulas, calculation procedures, and interpretive guidelines for these indices, and it offers practical approaches that teachers and researchers can apply using basic software without requiring sophisticated statistical packages. Understanding the principles of item analysis and improving items to meet quality standards will enhance the accuracy, validity, and fairness of learning achievement assessment.
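To make the calculation procedures concrete, the following minimal Python sketch computes the three indices for a single item. It assumes the conventional Classical Test Theory definitions: difficulty as the proportion of correct responses, discrimination based on upper and lower 27% groups ranked by total test score (Kelley, 1939), and a distractor counted as functional when chosen by at least 5% of examinees (Tarrant et al., 2009). The function names and example data are illustrative only and are not taken from the article.

```python
# Minimal sketch of CTT item analysis for one multiple-choice item.
# Assumptions (not from the article): scores are 0/1 per examinee,
# upper/lower groups are the top and bottom 27% ranked by total test
# score (Kelley, 1939), and a distractor is "functional" if chosen by
# at least 5% of examinees (Tarrant et al., 2009).

from collections import Counter

def difficulty_index(item_scores):
    """p = proportion of examinees answering the item correctly."""
    return sum(item_scores) / len(item_scores)

def discrimination_index(item_scores, total_scores, group_ratio=0.27):
    """D = p_upper - p_lower, using the top/bottom 27% by total score."""
    n = len(item_scores)
    k = max(1, round(group_ratio * n))
    # Rank examinees by total test score, highest first.
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    upper = [item_scores[i] for i in ranked[:k]]
    lower = [item_scores[i] for i in ranked[-k:]]
    return sum(upper) / k - sum(lower) / k

def functional_distractors(choices, correct_choice, threshold=0.05):
    """Distractors chosen by at least `threshold` of all examinees."""
    counts = Counter(choices)
    n = len(choices)
    return {opt for opt, c in counts.items()
            if opt != correct_choice and c / n >= threshold}

# Illustrative data: 10 examinees, option 'B' keyed as correct.
choices = ['B', 'B', 'A', 'B', 'C', 'B', 'D', 'B', 'A', 'B']
item_scores = [1 if c == 'B' else 0 for c in choices]
total_scores = [38, 35, 20, 33, 18, 30, 15, 29, 22, 36]  # total test scores

print(f"p = {difficulty_index(item_scores):.2f}")  # 0.60
print(f"D = {discrimination_index(item_scores, total_scores):.2f}")
print("functional distractors:", functional_distractors(choices, 'B'))
```

As a rule of thumb drawn from Ebel and Frisbie (1991), discrimination values of .40 and above indicate very good items, .30 to .39 reasonably good, .20 to .29 marginal, and below .20 poor; difficulty values near the middle of the 0 to 1 range are generally preferred for discriminating among examinees.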

References

Kanjanawasee, S. (2013). Classical test theory (7th ed.). Bangkok: Chulalongkorn University Press. (in Thai)

Allen, M. J. & Yen, W. M. (1979). Introduction to measurement theory. Monterey, CA: Brooks/Cole.

Crocker, L. & Algina, J. (1986). Introduction to classical and modern test theory. New York, NY: Holt, Rinehart and Winston.

Ebel, R. L. & Frisbie, D. A. (1991). Essentials of educational measurement (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.

Gierl, M. J. & Bulut, O. (2017). Using distractor analysis to evaluate item quality. In M. J. Gierl & O. Bulut (Eds.), Handbook of diagnostic classification models (pp. 81–108). Cham: Springer.

Gronlund, N. E. & Linn, R. L. (1990). Measurement and evaluation in teaching (6th ed.). New York, NY: Macmillan.

Haladyna, T. M. et al. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-333.

Kehoe, J. (1995). Basic item analysis for multiple-choice tests. Practical Assessment, Research & Evaluation, 4(10), 1-3.

Kelley, T. L. (1939). The selection of upper and lower groups for the validation of test items. Journal of Educational Psychology, 30, 17-24.

Krishnan, D. R. (2013). Statistical estimation techniques requiring nearly normal sampling distributions. Retrieved February 1, 2026, from https://www.pure.ed.ac.uk/ws/files/29266196/frp0477_krishnan.pdf

Miller, M. D. et al. (2009). Measurement and assessment in teaching (10th ed.). Upper Saddle River, NJ: Pearson.

Rao, C. et al. (2016). Item analysis of multiple choice questions: Assessing an assessment tool in medical students. International Journal of Educational and Psychological Researches, 2(4), 201-204.

Rezigalla, A. A. et al. (2024). Item analysis: The impact of distractor efficiency on the difficulty index and discrimination power of multiple-choice items. BMC Medical Education, 24(445), 1-7.

Tarrant, M. et al. (2009). An assessment of functioning and non-functioning distractors in multiple-choice questions: A descriptive analysis. BMC Medical Education, 9(40), 1-8.

Published

2026-02-26

How to Cite

Khaninphasut, P. (2026). ITEM ANALYSIS OF MULTIPLE-CHOICE QUESTIONS UNDER CLASSICAL TEST THEORY. Journal of Interdisciplinary Innovation Review, 9(1), 373–384. Retrieved from https://so04.tci-thaijo.org/index.php/jidir/article/view/286704