Nurturing Knowledge: A Virtue Epistemology Approach to Explainable AI

Laura Candiotto, Jakub Growiec, Philipp Kellmeyer, Ithai Rabinowitch, and Michael Livermore, "Nurturing Knowledge: A Virtue Epistemology Approach to Explainable AI" in Philipp Hacker (ed.), Oxford Intersections: AI in Society (Oxford, online edn), Oxford Academic, 2025

DOI: https://doi.org/10.1093/9780198945215.003.0176

Abstract

AI technologies, particularly deep neural networks and machine learning models, have become increasingly integrated into knowledge production across diverse scientific domains, raising critical concerns about explainability and interpretability. Different disciplines and contexts require fundamentally different types of explanations, making universal approaches to explainable AI inadequate. Virtue epistemology offers a promising framework for addressing these challenges by focusing on how AI systems cultivate or undermine epistemic virtues within specific knowledge communities. Rather than seeking explanations in abstract terms, virtue epistemology emphasizes epistemic abilities and character traits as they manifest within particular epistemic cultures. Case studies from social science, neuroscience, medicine, and the humanities reveal that meaningful progress in explainable AI requires aligning computational reasoning with the cultivation of epistemic virtues and the mitigation of epistemic vices that characterize each scientific community's specialized knowledge practices and norms.

This article is the result of the fourth Intercontinental Academia (ICA4), organized by the Paris Institute for Advanced Studies, which contributed significantly to its development. It is part of the ongoing transdisciplinary work on artificial intelligence carried out at the Paris IAS.
