Effects of XAI on perception, trust and acceptance

This talk examines how Explainable Artificial Intelligence (XAI) influences human perception, trust, and acceptance of AI-driven systems. Drawing on a review of empirical studies, the presentation shows how providing intelligible explanations shapes individuals’ perception of AI-generated outputs. By synthesizing findings from diverse contexts, it identifies mechanisms underlying the cognitive processing of explanations and the factors that modulate trust and acceptance. The talk aims to prompt a broader conversation on designing XAI systems that not only perform well but also empower users through comprehensible, trust-building explanations.
