A practical prototyping tool to design explainable AI for non-technical end-users.
End-User-Friendly Explanatory Forms
• Feature-based explanation
• Example-based explanation
• Rule-based explanation
• Contextual information

Explanation Goals

EUCA Dataset
Associated Paper: EUCA: the End-User-Centered Explainable AI Framework
Authors: Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, Ghassan Hamarneh
The EUCA dataset is intended for modelling personalized or interactive explainable AI. It contains 309 data points of 32 end-users’ preferences on 12 forms of explanation (including feature-, example-, and rule-based explanations). The data were collected in a 2019-2020 user study with 32 layperson participants in the Greater Vancouver area. In the user study, the participants (P01-P32) were presented with AI-assisted critical tasks on house price prediction, health status prediction, purchasing a self-driving car, and studying for a biology exam [1]. Within each task and for its given explanation goal [2], the participants selected and ranked the explanation forms [3] that they considered most suitable.
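As a rough sketch of how the dataset might be loaded and queried in Python/pandas (the file name and column names below are illustrative assumptions; check the files inside the downloaded zip for the actual names):

```python
import pandas as pd

# Hypothetical file and column names -- adjust to the actual files in the zip.
df = pd.read_csv("euca_data.csv")
print(df.shape)  # roughly 309 rows, one per recorded preference

# Example query: the explanation forms participant P01 selected for the
# house-price task, ordered by the rank the participant assigned.
p01_house = df[(df["participant"] == "P01") & (df["task"] == "house")]
print(p01_house.sort_values("rank"))
```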
Column description:
It contains the participants’ demographics, including their age, gender, educational background, and their knowledge of and attitudes toward AI.
EUCA dataset zip file for download
There are four tasks. The task labels and their corresponding task titles are:
house - Selling your house
car - Buying an autonomous driving vehicle
health - Personal health decision
bird - Learning bird species
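For scripting convenience, the task labels can be kept as a small lookup table (a plain-Python sketch, with values copied from the list above):

```python
# Task label -> task title, as listed above.
TASK_TITLES = {
    "house": "Selling your house",
    "car": "Buying an autonomous driving vehicle",
    "health": "Personal health decision",
    "bird": "Learning bird species",
}
```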
Please refer to the EUCA quantitative data analysis report for the storyboards of the tasks and explanation goals presented in the user study.
End-users may have different goals/purposes when checking an explanation from AI. The EUCA dataset includes the following 11 explanation goals, listed with their [label] in the dataset, full name, and description (a lookup-table sketch follows the list):
[safe] Ensure safety: users need to ensure the safety of the decision’s consequences.
[bias] Detect bias: users need to ensure the decision is impartial and unbiased.
[unexpect] Resolve disagreement with AI: the AI prediction is unexpected and there are disagreements between the user and the AI.
[expected] Expected: the AI’s prediction is expected and aligns with users’ expectations.
[differentiate] Differentiate similar instances: due to the consequences of wrong decisions, users sometimes need to discern similar instances or outcomes. For example, a doctor differentiates whether a diagnosis is a benign or malignant tumor.
[learning] Learn: users need to gain knowledge, improve their problem-solving skills, and discover new knowledge.
[control] Improve: users seek causal factors to control and improve the predicted outcome.
[communicate] Communicate with stakeholders: many critical decision-making processes involve multiple stakeholders, and users need to discuss the decision with them.
[report] Generate reports: users need to utilize the explanations to perform particular tasks such as report production. For example, a radiologist generates a medical report on a patient’s X-ray image.
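A corresponding lookup table for the goal labels enumerated above might look like the following sketch (labels as given in the dataset, short names taken from the list; only the goals listed above are included):

```python
# Explanation-goal label -> short name, as listed above.
GOAL_NAMES = {
    "safe": "Ensure safety",
    "bias": "Detect bias",
    "unexpect": "Resolve disagreement with AI",
    "expected": "Expected",
    "differentiate": "Differentiate similar instances",
    "learning": "Learn",
    "control": "Improve",
    "communicate": "Communicate with stakeholders",
    "report": "Generate reports",
}
```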
The following 12 explanation forms are end-user-friendly, i.e., no technical knowledge is required for end-users to interpret the explanation.
Note: occasionally there is a wild card, which means the participant drew the card themselves; it is indicated as ‘wc’.
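If you process the data programmatically, wild-card rows can be separated out before aggregating form preferences; a minimal sketch, again assuming hypothetical file and column names:

```python
import pandas as pd

df = pd.read_csv("euca_data.csv")  # hypothetical file name, as above

# "form" is an assumed column name; 'wc' marks cards drawn by the participant.
wild_cards = df[df["form"] == "wc"]
standard = df[df["form"] != "wc"]
print(f"{len(wild_cards)} wild-card entries out of {len(df)} rows")
```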
For visual examples of each explanation form card, please refer to the Explanatory_form_labels.pdf document.
Link to the details on users’ requirements for different explanation forms
@article{jin2021euca,
title={EUCA: the End-User-Centered Explainable AI Framework},
author={Weina Jin and Jianyu Fan and Diane Gromala and Philippe Pasquier and Ghassan Hamarneh},
year={2021},
eprint={2102.02437},
archivePrefix={arXiv},
primaryClass={cs.HC}
}