Test-Takers' Performances on and Perceptions of Two Different Modes of Online Speaking Tests

Wiramon Sangsuwan
Anchana Rukthong

Abstract

A direct test of English speaking is important for evaluating what learners can do in real-life situations. However, because of challenges in test administration, especially with large numbers of test-takers, a direct speaking test is not feasible in many contexts, and indirect tests, such as conversational cloze tests, are used instead. In response to this problem, this study used communication technology to create speaking tests with two delivery modes: a Real-Time Interview with a human interviewer (RTI) and a Pre-Recorded Video (PRV). The tests were administered to a group of 40 first-year university students, followed by a perception questionnaire and a group interview to collect data on test-takers’ perceptions of the tasks. Results showed that the participants performed significantly better on the PRV test tasks and that they perceived both types of tasks positively. The strongest quality of both test tasks, as perceived by the participants, was authenticity. While the RTI tasks were perceived to have significantly greater impact and interactiveness than the PRV tasks, the test-takers reported in the interview that they felt more comfortable and less anxious while completing the PRV tasks.

Article Details

How to Cite
Sangsuwan, W., & Rukthong, A. (2023). Test-Takers’ Performances on and Perceptions of Two Different Modes of Online Speaking Tests. LEARN Journal: Language Education and Acquisition Research Network, 16(2), 168–183. Retrieved from https://so04.tci-thaijo.org/index.php/LEARN/article/view/266940
Section
Research Articles
Author Biographies

Wiramon Sangsuwan, Faculty of Liberal Arts, Prince of Songkla University, Thailand

A graduate student in Teaching English as an International Language at Prince of Songkla University, Hat Yai, Thailand. Her research interests include second language assessment.

Anchana Rukthong, Faculty of Liberal Arts, Prince of Songkla University, Thailand

An assistant professor at the Faculty of Liberal Arts, Prince of Songkla University, Hat Yai, Thailand. Her main research interests are language education, psycholinguistics, and second and foreign language assessment, with a specific focus on the assessment of integrated language skills.
