ITEM ANALYSIS OF MULTIPLE-CHOICE QUESTIONS UNDER CLASSICAL TEST THEORY
Keywords:
Item Analysis, Item Difficulty, Item Discrimination, Multiple-Choice Questions, Classical Test Theory
Abstract
This article presents principles and practical guidelines for analyzing the quality of multiple-choice questions under Classical Test Theory. Item quality is an important factor in the validity of assessing students' true abilities. Item analysis is a statistical process that systematically examines the effectiveness of items through three essential indices. First, the item difficulty index reflects the proportion of examinees who answer the item correctly. Second, the item discrimination index measures an item's ability to differentiate between high-ability and low-ability examinees. Third, distractor efficiency analyzes how well the distractors attract low-ability examinees. The study also presents the formulas, calculation procedures, and interpretive guidelines for these indices, together with practical approaches that teachers and researchers can apply using basic software programs, without requiring sophisticated statistical packages. Understanding the principles of item analysis and revising items to meet quality standards will enhance the accuracy, validity, and fairness of learning achievement assessment.
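The three indices described above can be sketched in a few lines of code. The following is an illustrative Python sketch, not material from the article itself: the data layout (one chosen option and one total test score per examinee), the function names, the 27% extreme-group fraction for discrimination, and the 5% selection threshold for a "functional" distractor are common conventions assumed here.

```python
# Illustrative sketch of the three Classical Test Theory item-analysis
# indices for a single multiple-choice item. Conventions (27% groups,
# 5% distractor threshold) are common rules of thumb, not the article's.

def difficulty_index(responses, key):
    """Difficulty p: proportion of all examinees answering correctly."""
    return sum(r == key for r in responses) / len(responses)

def discrimination_index(scored_responses, key, group_frac=0.27):
    """Discrimination D: proportion correct in the upper score group
    minus proportion correct in the lower group (27% extreme groups)."""
    ranked = sorted(scored_responses, key=lambda t: t[0], reverse=True)
    n = max(1, round(group_frac * len(ranked)))
    upper = [resp for _, resp in ranked[:n]]   # highest total scores
    lower = [resp for _, resp in ranked[-n:]]  # lowest total scores
    p_upper = sum(r == key for r in upper) / n
    p_lower = sum(r == key for r in lower) / n
    return p_upper - p_lower

def functional_distractors(responses, key, options, threshold=0.05):
    """Distractors selected by at least `threshold` of examinees
    are counted as functional; the rest are candidates for revision."""
    n = len(responses)
    return [o for o in options
            if o != key and sum(r == o for r in responses) / n >= threshold]
```

Commonly cited interpretive guidelines pair with these functions: a difficulty p between roughly 0.2 and 0.8 and a discrimination D of at least 0.2 are typical minimum standards for retaining an item, while a distractor chosen by fewer than 5% of examinees is usually flagged for revision.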
License
Copyright (c) 2026 Journal of Interdisciplinary Innovation Review

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
To comply with copyright law, all authors must sign an agreement transferring copyright to the Journal, including for the final revised versions of their articles. Authors must also declare that their articles will be published only in the Journal of Interdisciplinary Innovation Review. If an article contains pictures, tables, or content that has been published previously, the authors must obtain written permission from the original copyright holders and present that evidence to the editor before the article is printed. Articles that do not meet these criteria will be removed from the Journal without exception.


