Reference

Impact of explainable AI on cognitive load: Insights from an empirical study, Lukas-Valentin Herm. ECIS 2023 Research Papers (2023)

Abstract

While the emerging research field of explainable artificial intelligence (XAI) claims to address the lack of explainability in high-performance machine learning models, in practice XAI research targets developers rather than actual end-users. Unsurprisingly, end-users are often unwilling to use XAI-based decision support systems. Similarly, there is scarce interdisciplinary research on end-users’ behavior during the use of XAI explanations, leaving it unknown how explanations may impact cognitive load and, in turn, affect end-user performance. Therefore, we conducted an empirical study with 271 prospective physicians, measuring their cognitive load, task performance, and task time for distinct implementation-independent XAI explanation types using a COVID-19 use case. We found that these explanation types strongly influence end-users’ cognitive load, task performance, and task time. Based on these findings, we classified the explanation types within a mental efficiency matrix, ranking local XAI explanation types as best, and thereby provide recommendations for future applications and implications for sociotechnical XAI research.