Illuminating the Black Box: The Transformative Role of Explainable AI Across Humanities, Social Sciences, Arts, and Engineering

Main Article Content

Abdul Jabbar Perumbalath
Mohammadamin Dadras
Lim Chong Ewe

Abstract

As artificial intelligence systems have grown more capable, their inner workings have become harder to understand, creating an urgent need for transparency. The field of Explainable AI (XAI) has risen to meet this need, creating tools that help us understand how an AI reaches its conclusions. This paper examines how these explanation tools are being used far beyond computer science. For instance, historians use them to discover fresh patterns in ancient texts, and engineers use them to make new systems safer before they are even built. The study explores how XAI serves as a collaborative tool in the humanities, social sciences, artistic creation, and human-centered design. In brief, these explanation tools build a crucial bridge between human experts and powerful technology. They build trust by making the AI's "thinking" clear for everyone to see. This clarity allows people to question its results, check for fairness, and use it to improve their own creative and investigative work. The real-world examples in this paper show that making AI understandable is key to a successful and responsible partnership between people and machines in almost any field.

Article Details

How to Cite
Perumbalath, A. J., Dadras, M., & Ewe, L. C. (2026). Illuminating the Black Box: The Transformative Role of Explainable AI Across Humanities, Social Sciences, Arts, and Engineering. Journal of Multidisciplinary in Humanities and Social Sciences, 9(1), 125–137. Retrieved from https://so04.tci-thaijo.org/index.php/jmhs1_s/article/view/283153
Section
Research Articles

References

Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052

Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 3, 1–13. https://doi.org/10.1145/3290605.3300233

Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

Bansal, G., Wu, T., Zhou, J., Fok, R., Nushi, B., Kamar, E., Ribeiro, M. T., & Weld, D. (2019). Beyond accuracy: The role of mental models in human-AI team performance. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7(1), 2–11. https://doi.org/10.1609/hcomp.v7i1.5285

Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512

Cetinic, E., & She, J. (2022). Understanding and creating art with AI: Review and outlook. ACM Transactions on Multimedia Computing, Communications, and Applications, 18(2), 1–22. https://doi.org/10.1145/3475799

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin's Press.

Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 80–89. https://doi.org/10.1109/dsaa.2018.00018

Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.V38i3.2741

Hertzmann, A. (2020). Computers do not make art, people do. Communications of the ACM, 63(5), 45–48. https://doi.org/10.1145/3347092

Hong, J., & Curran, N. M. (2019). Artificial intelligence, artists, and art: Attitudes toward artwork produced by humans vs. artificial intelligence. ACM Transactions on Multimedia Computing, Communications, and Applications, 15(2s), 1–16. https://doi.org/10.1145/3326337

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. https://doi.org/10.1080/1369118X.2016.1154087

Krishnan, M. (2020). Against interpretability: A critical examination of the interpretability problem in machine learning. Philosophy & Technology, 33(3), 487–502. https://doi.org/10.1007/s13347-019-00372-9

Liao, Q. V., & Varshney, K. R. (2021). Human-centered explainable AI (XAI): From algorithms to user experiences. arXiv preprint. https://doi.org/10.48550/arXiv.2110.10790

Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Communications of the ACM, 61(10), 36–43. https://doi.org/10.1145/3233231

Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. In NIPS'17: Proceedings of the 31st International Conference on Neural Information Processing Systems, 4768–4777. https://dl.acm.org/doi/10.5555/3295222.3295230

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown Publishing Group.

Reyes, M., Meier, R., Pereira, S., Silva, C. A., Dahlweid, F., von Tengg-Kobligk, H., Summers, R. M., & Wiest, R. (2020). On the interpretability of artificial intelligence in radiology: Challenges and opportunities. Radiology: Artificial Intelligence, 2(3), e190043. https://doi.org/10.1148/ryai.2020190043

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. https://doi.org/10.1145/2939672.2939778

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x

Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717738104

Shortliffe, E. H., & Buchanan, B. G. (1975). A model of inexact reasoning in medicine. Mathematical Biosciences, 23(3–4), 351–379. https://doi.org/10.1016/0025-5564(75)90047-4

Slack, D., Hilgard, S., Jia, E., Singh, S., & Lakkaraju, H. (2020). Fooling LIME and SHAP: Adversarial attacks on post hoc explanation methods. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 180–186. https://doi.org/10.1145/3375627.3375830

Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25, 44–56. https://doi.org/10.1038/s41591-018-0300-7

Wachter, S., Mittelstadt, B., & Russell, C. (2018). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2), 841–887. https://doi.org/10.2139/ssrn.3063289

Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed.). Thousand Oaks, CA: Sage Publications.