The Hermeneutic Loop: Explainable AI (XAI) as a Partner in Interpretive HASS Research
Abstract
This paper presents the “Hermeneutic Loop,” a framework for using Explainable Artificial Intelligence (XAI) in humanities and social science research. It addresses a central problem: complex AI models often act as “black boxes,” making it difficult for researchers to understand how they reach their results. This opacity clashes with the core commitments of fields built on deep interpretation. Our framework recasts XAI from a purely technical tool into a collaborative partner, creating a structured conversation between the researcher and the model that makes the interpretive process more transparent and more rigorous. We demonstrate the method with three case studies. First, in literary studies, we applied a language model (BERT) together with Integrated Gradients to a corpus of 500 British novels, visually tracing how the meaning of the word “virtue” shifted from a social concept to a psychological one over the course of a century. Second, in historical research, we analyzed letters from the American Civil War; when a standard sentiment-analysis model failed, we used LIME to diagnose its mistakes, a process that revealed distinctively nineteenth-century conventions of emotional expression the model had initially missed. Finally, in a study of climate-change debates on Twitter, we used attention visualization to show how different groups deployed the same words to build distinct political arguments. Across all three studies, the Hermeneutic Loop strengthened the research by forcing us to interrogate the AI’s output, which led to more reliable and insightful conclusions and ensured that computational power served human understanding. The approach thus provides a necessary bridge, allowing scholars to adopt advanced AI while remaining true to the fundamental values of evidence-based interpretation.
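
To make the first case study concrete, the sketch below shows how token-level Integrated Gradients attributions can be computed for a BERT classifier with the Captum library (Kokhlikyan et al., 2020). It is a minimal illustration under stated assumptions, not the authors’ pipeline: the model name, the two-way social/psychological label scheme, and the example sentence are all invented for illustration, and a real study would first fine-tune the classifier on period-labelled passages.

import torch
from transformers import BertForSequenceClassification, BertTokenizer
from captum.attr import LayerIntegratedGradients

# Hypothetical stand-in: a real study would fine-tune this model to
# classify passages as using "virtue" in a social vs. psychological sense.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def forward_func(input_ids, attention_mask):
    return model(input_ids, attention_mask=attention_mask).logits

text = "Her virtue was a matter of public reputation."  # illustrative sentence
enc = tokenizer(text, return_tensors="pt")
input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]

# Baseline: the same sequence with every token replaced by [PAD]
baseline = torch.full_like(input_ids, tokenizer.pad_token_id)

# Attribute the "psychological" logit (class 1, hypothetical) back to the
# embedding layer, then collapse the embedding dimension to one score per token.
lig = LayerIntegratedGradients(forward_func, model.bert.embeddings)
attributions = lig.attribute(
    inputs=input_ids,
    baselines=baseline,
    additional_forward_args=(attention_mask,),
    target=1,
)
scores = attributions.sum(dim=-1).squeeze(0)

for token, score in zip(tokenizer.convert_ids_to_tokens(input_ids[0]), scores):
    print(f"{token:>12} {score.item():+.4f}")

Aggregating per-token scores like these across novels grouped by decade is one way the semantic drift described in the abstract could be traced visually.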
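The second case study turns LIME (Ribeiro et al., 2016) on a failing sentiment model to ask which words drove a mistaken prediction. The sketch below is again an assumption-laden stand-in rather than the study’s code: it trains a toy scikit-learn classifier in place of the audited model, and the training snippets and the letter text are invented for illustration.

from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the failing sentiment model audited in the case study.
train_texts = [
    "I am delighted with the news",
    "What a joyful day this has been",
    "I am heartbroken and weary",
    "The news has left me in despair",
]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(train_texts, train_labels)

# An invented Civil-War-style closing: the kind of formally elaborate
# courtesy that modern sentiment models tend to misread.
letter = "I remain, dear madam, your most obedient and humble servant."

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    letter, pipeline.predict_proba, num_features=6
)
# Each pair is (word, weight): which tokens pushed the model's decision.
print(explanation.as_list())

In the Hermeneutic Loop, weights like these are read interpretively: when the explanation shows the model leaning on formulaic courtesies as sentiment evidence, the error itself becomes a clue about historical conventions of emotional expression.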
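For the Twitter case study, the abstract describes attention visualization. One minimal way to surface attention weights from a Transformer (Vaswani et al., 2017) is sketched below using the Hugging Face transformers library; the model choice, the example tweet, and the decision to average the final layer’s heads are all illustrative assumptions rather than the study’s actual setup.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

tweet = "Climate action now: the science is settled."  # invented example
enc = tokenizer(tweet, return_tensors="pt")
with torch.no_grad():
    outputs = model(**enc)

# outputs.attentions is a tuple with one (batch, heads, seq, seq)
# tensor per layer; here we average the heads of the final layer.
attn = outputs.attentions[-1].squeeze(0).mean(dim=0)  # (seq, seq)

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
row = attn[tokens.index("climate")]  # what "climate" attends to

# Print the five tokens this word attends to most strongly.
for token, weight in sorted(zip(tokens, row.tolist()), key=lambda p: -p[1])[:5]:
    print(f"{token:>10} {weight:.3f}")

Comparing rows like these across tweets from different communities is one way to examine how the same vocabulary can anchor distinct argumentative structures.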
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Views and opinions appearing in the Journal are the responsibility of the authors of the articles and do not represent the views or responsibility of the editorial team.
References
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
Bail, C. A. (2014). The cultural environment: Measuring culture with big data. Theory and Society, 43(3–4), 465–482. https://doi.org/10.1007/s11186-014-9216-5
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Thousand Oaks, CA: Sage.
Dobson, J. E. (2019). Critical digital humanities: The search for a methodology. Urbana, IL: University of Illinois Press. https://doi.org/10.5622/illinois/9780252042270.001.0001
Drucker, J. (2021). The digital humanities coursebook: An introduction to digital methods for research and scholarship. London: Routledge.
Fish, S. (2012, January 9). The digital humanities and the transcending of mortality. The New York Times. https://archive.nytimes.com/opinionator.blogs.nytimes.com/2012/01/09/the-digital-humanities-and-the-transcending-of-mortality/
Gadamer, H.-G. (2013). Truth and method. (Revised 2nd ed.). London: Bloomsbury Academic.
Graham, S., Milligan, I., Weingart, S., & Martin, K. (2022). Exploring big historical data: The historian's macroscope. (2nd ed.). World Scientific Publishing. https://doi.org/10.1142/12435
Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105. https://doi.org/10.2307/25148625
Kokhlikyan, N., Miglani, V., Martin, M., Wang, E., Alsallakh, B., Reynolds, J., Melnikov, A., Kliushkina, N., Araya, C., Yan, S., & Reblitz-Richardson, O. (2020). Captum: A unified and generic model interpretability library for PyTorch. arXiv. https://arxiv.org/abs/2009.07896
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. arXiv. https://doi.org/10.48550/arXiv.1705.07874
Moretti, F. (2013). Distant reading. London: Verso Books.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, August 13–17 (pp. 1135–1144). https://doi.org/10.1145/2939672.2939778
Timmermans, S., & Tavory, I. (2012). Theory construction in qualitative research: From grounded theory to abductive analysis. Sociological Theory, 30(3), 167–186. https://doi.org/10.1177/0735275112457914
Underwood, T. (2019). Distant horizons: Digital evidence and literary change. Chicago: The University of Chicago Press.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. arXiv. https://doi.org/10.48550/arXiv.1706.03762
Wevers, M., & Smits, T. (2020). The visual digital turn: Using neural networks to study historical images. Digital Scholarship in the Humanities, 35(1), 194–207. https://doi.org/10.1093/llc/fqy085
Yin, R. K. (2018). Case study research and applications: Design and methods. (6th ed.). Thousand Oaks, CA: Sage.