Collegium Helveticum
Fellow Project 2024–2025

Explaining the Unexplainable
Bridging the Gap Between Generative AI and Theory-Driven Explanations

This project aims to uncover how generative artificial intelligence (GenAI), such as ChatGPT, produces explanations in educational settings. Using grounded theory, it will systematically analyze GenAI's explanation processes across disciplines including programming, law, and history. The goal is to develop an empirically derived computational theory of explanations that will inform the creation of next-generation transparent explanatory AI systems, making AI a trustworthy educational tool in line with UNESCO's vision for AI in education.

Although GenAI is effective at producing explanations, the rationale behind them remains largely unknown. Opening this "black box" is an ongoing challenge, and current tools are insufficient: existing explainable AI (XAI) algorithms generally fail to explain large natural language generation models effectively. The project will therefore investigate how GenAI generates explanations of historical, legal, and technological concepts by integrating empirical methods and philosophical theories with existing XAI technology.