Vocabulary Size of Thai Graduate Students in Different Disciplines and Their Opinions of Its Influences on the Use of AI in English Language Learning
Abstract
This study investigates the vocabulary size of Thai graduate students in science and non-science disciplines, together with their opinions on how vocabulary size affects their use of AI tools in English language learning. A total of 217 students from a public Thai university completed the Updated Vocabulary Levels Test (Webb et al., 2017) and took part in semi-structured interviews. Descriptive statistics revealed that most students had a low vocabulary size, and a Mann-Whitney U test showed that science students performed significantly better at the 1,000-, 4,000-, and 5,000-word levels. The interview data indicated that vocabulary size may not influence most types of AI-assisted language learning activity, students' trust in AI, or their reliance on it for language learning, with the exception of the choice of prompt language. These findings offer implications for English vocabulary instruction and highlight the importance of integrating AI-assisted language learning tools with lexical development in higher education contexts.
References
Alhaisoni, E., & Alhaysony, M. (2017). An investigation of Saudi EFL university students’ attitudes towards the use of Google Translate. International Journal of English Language Education, 5(1), 72–82. https://doi.org/10.5296/ijele.v5i1.10696
Bardovi-Harlig, K., & Stringer, D. (2010). Variables in second language attrition. Studies in Second Language Acquisition, 32(1), 1–45. https://doi.org/10.1017/S0272263109990246
Cameron, L. (2002). Measuring vocabulary size in English as an additional language. Language Teaching Research, 6(2), 145–173. https://doi.org/10.1191/1362168802lr103oa
Campbell, J. L., Quincy, C., Osserman, J., & Pedersen, O. K. (2013). Coding in-depth semi structured interviews: Problems of unitization and intercoder reliability and agreement. Sociological Methods & Research, 42(3), 294–320. https://doi.org/10.1177/0049124113500475
Chang, T. S., Li, Y., Huang, H. W., & Whitfield, B. (2021). Exploring EFL students' writing performance and their acceptance of AI-based automated writing feedback. In Proceedings of the 2021 2nd International Conference on Education Development and Studies (pp. 31–35). https://doi.org/10.1145/3459043.345906
Charnchairerk, C. (2022). Key defining linguistic features in the writing performance of first-year university students across different language proficiency levels. LEARN Journal: Language Education and Acquisition Research Network, 15(2), 858–891. https://so04.tci-thaijo.org/index.php/LEARN/article/view/259954
Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Pearson.
Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). Sage.
Dörnyei, Z. (2007). Research methods in applied linguistics: Quantitative, qualitative, and mixed methodologies. Oxford University Press.
EF Education First. (2024). EF English proficiency index 2024: A ranking of 116 countries and regions by English skills. Signum International AG. https://www.ef.com/assetscdn/WIBIwq6RdJvcD9bc8RMd/cefcom-epi-site/reports/2024/ef-epi-2024-english.pdf
Guo, Q., Feng, R., & Hua, Y. (2022). How effectively can EFL students use automated written corrective feedback (AWCF) in research writing? Computer Assisted Language Learning, 35(9), 2312–2331. https://doi.org/10.1080/09588221.2021.1879161
Hirsh, D., & Nation, P. (1992). What vocabulary size is needed to read unsimplified texts for pleasure? Reading in a Foreign Language, 8(2), 689–696. https://doi.org/10.64152/10125/67046
IBM Corp. (2022). IBM SPSS Statistics for Windows (Version 29.0) [Computer software]. IBM Corp.
Karataş, F., Abedi, F. Y., Ozek, G. F., Karadeniz, D., & Kuzgun, Y. (2024). Incorporating AI in foreign language education: An investigation into ChatGPT’s effect on foreign language learners. Education and Information Technologies, 29, 19343–19366. https://doi.org/10.1007/s10639-024-12574-6
Koltovskaia, S. (2020). Student engagement with automated written corrective feedback (AWCF) provided by Grammarly: A multiple case study. Assessing Writing, 44, Article 100450. https://doi.org/10.1016/j.asw.2020.100450
Krashen, S. (1985). The input hypothesis: Issues and implications. Longman.
Kurniati, E. Y., & Fithriani, R. (2022). Post-graduate students’ perceptions of Quillbot utilization in English academic writing class. Journal of English Language Teaching and Linguistics, 7(3), 437–451. https://doi.org/10.21462/jeltl.v7i3.852
Laufer, B. (1998). The development of passive and active vocabulary: Same or different? Applied Linguistics, 19, 255–271. https://doi.org/10.1093/applin/19.2.255
Lee, Y.-J., Davis, R. O., & Lee, S. O. (2024). University students’ perceptions of artificial intelligence-based tools for English writing courses. Online Journal of Communication and Media Technologies, 14(1), Article e202412. https://doi.org/10.30935/ojcmt/14195
Liu, Y. (2024). Reshaping and transforming of English teaching in higher education in the ChatGPT era: An empirical study based on big data. In 2024 IEEE 24th International Conference on Software Quality, Reliability, and Security Companion (QRS-C) (pp. 1302–1311). IEEE. https://doi.org/10.1109/qrs-c63300.2024.00169
Meniado, J. C. (2023). The impact of ChatGPT on English language teaching, learning, and assessment: A rapid review of literature. Arab World English Journal, 14(4), 3–18. https://doi.org/10.24093/awej/vol14no4.1
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). Macmillan.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50(9), 741–749.
Mungkonwong, P., & Wudthayagorn, J. (2017). An investigation of vocabulary size of Thai freshmen and its relationship to years of English study. LEARN Journal: Language Education and Acquisition Research Network, 10(2), 1–9. https://so04.tci-thaijo.org/index.php/LEARN/article/view/111681
Nation, I. S. P. (1983). Testing and teaching vocabulary. Guidelines, 5, 12–25.
Nation, I. S. P. (2007). The four strands. Innovation in Language Learning and Teaching, 1(1), 1–12. https://doi.org/10.2167/illt039.0
Nation, I. S. P. (2022). Learning vocabulary in another language (3rd ed.). Cambridge University Press.
Nation, I. S. P., & Beglar, D. (2007). A vocabulary size test. The Language Teacher, 31(7), 9–13. https://jalt-publications.org/i/2007-07_31.7
Nation, I. S. P., & Waring, R. (1997). Vocabulary size, text coverage, and word lists. In N. Schmitt & M. McCarthy (Eds.), Vocabulary: Description, acquisition and pedagogy (pp. 6–19). Cambridge University Press.
Prapphal, K. (2003). English proficiency of Thai learners and directions of English teaching and learning in Thailand. Journal of Studies in the English Language, 1, 6–12. https://so04.tci-thaijo.org/index.php/jsel/article/view/21840
Schmitt, N. (2010). Researching vocabulary: A vocabulary research manual. Palgrave Macmillan. https://doi.org/10.1057/9780230293977
Schmitt, N., Jiang, X., & Grabe, W. (2011). The percentage of words known in a text and reading comprehension. The Modern Language Journal, 95, 26–43. https://doi.org/10.1111/j.1540-4781.2011.01146.x
Smith Jr., E. V. (2004). Evidence for the reliability of measures and validity of measure interpretation: A Rasch measurement perspective. In E. V. Smith Jr. & R. M. Smith (Eds.), Introduction to Rasch measurement: Theory, models and applications (pp. 93–122). JAM Press.
Solak, E. (2024). Revolutionizing language learning: How ChatGPT and AI are changing the way we learn languages. International Journal of Technology in Education, 7(2), 353–372. https://doi.org/10.46328/ijte.732
Srimonkontip, S., & Wiriyakarun, P. (2014). Measuring vocabulary size and vocabulary depth of secondary education students in a Thai-English bilingual school. Journal of Liberal Arts, Ubon Ratchathani University, 10(2), 181–209. https://so03.tci-thaijo.org/index.php/jla_ubu/article/view/94542/73929
Sukying, A. (2023). The role of vocabulary size and depth in predicting postgraduate students' second language writing performance. LEARN Journal: Language Education and Acquisition Research Network, 16(1), 575–603. https://so04.tci-thaijo.org/index.php/LEARN/article/view/263457
Teng, M. F. (2024). “ChatGPT is the companion, not enemies”: EFL learners’ perceptions and experiences in using ChatGPT for feedback in writing. Computers and Education: Artificial Intelligence, 7, Article 100270. https://doi.org/10.1016/j.caeai.2024.100270
Webb, S. (2013). Depth of vocabulary knowledge. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics (pp. 1656–1663). Wiley-Blackwell.
Webb, S., Sasao, Y., & Ballance, O. (2017). The updated Vocabulary Levels Test: Developing and validating two new forms of the VLT. ITL-International Journal of Applied Linguistics, 168(1), 33–69. https://doi.org/10.1075/itl.168.1.02web
Zhang, Z. V. (2020). Engaging with automated writing evaluation (AWE) feedback on L2 writing: Student perceptions and revisions. Assessing Writing, 43, Article 100439. https://doi.org/10.1016/j.asw.2019.100439
Zhang, Z. V., & Hyland, K. (2018). Student engagement with teacher and automated feedback on L2 writing. Assessing Writing, 36, 90–102. https://doi.org/10.1016/j.asw.2018.02.004