TLDR: EUCA provides design suggestions on explanation forms and goals from the end-user's perspective.
The ability to explain its decisions to end-users is a necessity for deploying AI as critical decision support. Yet making AI explainable to non-technical end-users is a challenging and relatively neglected problem. To bridge the gap, we first identify twelve end-user-friendly explanatory forms that do not require technical knowledge to comprehend, including feature-, example-, and rule-based explanations. We then instantiate the explanatory forms as prototyping cards in four AI-assisted critical decision-making tasks, and conduct a user study to co-design low-fidelity prototypes with 32 layperson participants. The results confirm the relevance of using explanatory forms as building blocks of explanations, and identify their properties: pros, cons, applicable explanation goals, and design implications. The explanatory forms, their properties, and prototyping support (including a suggested prototyping process, design templates and exemplars, and associated algorithms to actualize explanatory forms) constitute the End-User-Centered explainable AI framework EUCA, which is available at http://weinajin.github.io/end-user-xai. It serves as a practical prototyping toolkit for HCI/AI practitioners and researchers to understand user requirements and build end-user-centered explainable AI.
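As a loose illustration of two of these explanatory forms (a sketch, not EUCA's own algorithms), the snippet below produces a feature-based explanation via scikit-learn's permutation importance and an example-based explanation via the nearest training example; the dataset, model, and neighbor count are placeholder choices.

```python
# Sketch: two end-user-friendly explanatory forms with scikit-learn.
# The dataset and model here are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.neighbors import NearestNeighbors

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Feature-based explanation: which inputs influence the model most.
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = imp.importances_mean.argsort()[::-1][:3]
print("Most influential feature indices:", top)

# Example-based explanation: a similar case from the training data.
nn = NearestNeighbors(n_neighbors=2).fit(X)
_, idx = nn.kneighbors(X[:1])  # idx[0][0] is the query case itself
print("Nearest training example to case 0:", idx[0][1])
```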
IEEE VIS poster
Bridging AI Developers and End Users: an End-User-Centred Explainable AI Taxonomy and Visual Vocabularies
Jin, Weina, Carpendale, Sheelagh, Hamarneh, Ghassan, and Gromala, Diane
TLDR: We conducted a literature review and summarized end-user-friendly explanation forms as visual vocabularies. This is the precursor of the EUCA framework.
Researchers in the re-emerging field of explainable/interpretable artificial intelligence (XAI) have not paid enough attention to the end users of AI, who may be lay persons or domain experts such as doctors, drivers, and judges. We took an end-user-centric lens and conducted a literature review of 59 technical papers on XAI algorithms and/or visualizations. We grouped the explanatory forms in the literature into an end-user-friendly XAI taxonomy, which consists of three forms that explain AI's decisions: feature attribution, instance, and decision rules/trees. We also analyzed the visual representations for each explanatory form, and summarized them as the XAI visual vocabularies. Our work is a synergy of XAI algorithms, visualization, and user-centred design. It provides a practical toolkit for AI developers to define the explanation problem from a user-centred perspective, and expand the visualization space of explanations to develop more end-user-friendly XAI systems.
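As a hedged sketch of the third form in this taxonomy (decision rules/trees), the snippet below distills a black-box classifier into a shallow surrogate decision tree and prints its rules as plain text; the models and dataset are illustrative placeholders, not the poster's method.

```python
# Sketch: a rule-based explanation via a shallow surrogate decision tree
# trained to mimic a black-box model. Models/data are placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
black_box = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

# Fit a small, readable tree on the black box's predictions,
# then render its decision rules as indented text.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(data.data, black_box.predict(data.data))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

Keeping the surrogate shallow (here `max_depth=2`) trades fidelity to the black box for rules short enough for a non-technical end-user to read.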