UXR | Master Research Project
Does providing transparency on algorithmic decision-making influence perceptions of trust, fairness, understanding and AI use attitudes?
TL;DR: A large-scale survey project analyzing qualitative and quantitative data with the statistical software R. I was invited to present a poster abstract of my findings at CogSci 2022.
The Problem
Decision algorithms are becoming increasingly commonplace and are used in many high-stakes decision domains, such as treatment allocation and recidivism forecasting. However, public trust in automated decision-making has eroded as it has become apparent that decision algorithms can inadvertently introduce bias. Machine learning experts are focused on minimizing bias and improving the fairness of artificial intelligence (AI) decision systems, but little work has been done on ways to improve users’ perceptions of trust and fairness.
The Research Question
Does providing transparency into the workings of a decision algorithm impact users’ understanding, feelings of trust, perception of fairness, and attitudes toward AI use?
My role
I was the Lead Researcher on this project, which I conducted as my master’s thesis while working as a Graduate Student Researcher, with support from my supervisor and a PhD student.
Methodology
Participants were randomly assigned to one of two high-stakes decision scenarios (recidivism or treatment allocation) and then randomly assigned to one of four transparency levels (nothing, opaque, simple explanation, or detailed explanation). They were then asked to provide ratings of attitudes toward use, understanding, trust, and fairness. A total of 573 participants were recruited on Prolific and redirected to a Qualtrics survey for monetary compensation.
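As an illustration of the design only (not the actual analysis script), the condition assignments and ratings could be organized in R along these lines; the file name and column names below are hypothetical, not the real Qualtrics export.

# Sketch: load a (hypothetical) survey export and encode the 4x2 design.
# Column names are illustrative, not the actual Qualtrics variable names.
library(dplyr)

responses <- read.csv("survey_responses.csv") %>%
  mutate(
    transparency = factor(transparency,
                          levels = c("nothing", "opaque", "simple", "detailed")),
    context = factor(context, levels = c("recidivism", "treatment"))
  )

# Cell counts per condition (4 transparency levels x 2 decision contexts)
table(responses$transparency, responses$context)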
Results
Four 4x2 (transparency x context) between-subjects ANOVAs were run in R, one for each dependent variable, to explore whether the level of transparency in an explanation had an impact on ratings of understanding, trust, fairness, and attitudes toward use.
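As a rough sketch of one of these analyses (assuming the hypothetical responses data frame from the methodology sketch; the afex package and column names are assumptions, not necessarily what the thesis used):

# Sketch: 4x2 between-subjects ANOVA on trust ratings (one of the four DVs).
# aov_ez() expects one row per participant for a between-subjects design.
library(afex)

trust_anova <- aov_ez(
  id = "participant_id",                  # hypothetical participant identifier
  dv = "trust",                           # hypothetical trust rating
  between = c("transparency", "context"), # the two manipulated factors
  data = responses
)
trust_anova   # prints F tests for both main effects and the interaction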
Findings
As expected, providing greater transparency into the workings of a decision algorithm led to increased feelings of trust and improved understanding, but only to a point. Highly detailed explanations did not increase these feelings any more than a simple explanation did. Attitudes toward using AI in high-stakes decision-making and perceptions of fairness did not change with transparency level, although all four variables were highly correlated.
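The reported correlation among the four rating scales could be checked with a short sketch like the one below, again assuming the hypothetical column names used earlier.

# Sketch: correlation matrix across the four dependent variables.
cor(responses[, c("understanding", "trust", "fairness", "attitudes")],
    use = "pairwise.complete.obs")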
Research Insights
No Black Boxes: Opaque explanations were perceived as a “black box” by users, leading to low ratings of trust and understanding.
Simple Explanations > Detailed Explanations: Simple explanations that were more limited in the information they provided had stronger effects on trust and understanding than detailed explanations.
Type of Explanation Matters: The recidivism AI was seen as less fair than the healthcare AI. This may be because the explanations described how the system made decisions rather than why.
Recommendations
Opaque explanations decrease trust and understanding; provide users with some sort of explanation of how an AI makes decisions rather than no explanation at all.
Simple explanations had stronger effects on trust and understanding than detailed explanations; keep AI explanations short and simple for the lay user.
Use the appropriate explanation for the context to improve perceptions of fairness. In the context of treatment prioritization, users prefer explanations of how the AI came to a decision, while in the criminal justice context, users prefer explanations of why the AI made that decision.
Real World Impact
These findings point to a straightforward way to improve trust in and understanding of decision algorithms: provide users with simple textual explanations of how the system works. Less is more. There is no need to give users elaborate, highly detailed graphical explanations; these do not appear to further enhance the effect and may be too complicated for the average user.
I submitted my findings for publication at CogSci 2022 in Toronto, Canada. My abstract was accepted and I was invited to present my poster.