Interactive Uncertainty-Aware Explanations – Design Science Approach
Bachelor's Thesis, Master's Thesis
Overview
AI-augmented decision-making systems are increasingly adopted in high-stakes fields such as healthcare, where decisions often involve complex trade-offs and uncertainty. However, many AI systems fail to effectively communicate this uncertainty to users, which can negatively impact trust, reliance, and decision quality. Designing uncertainty-aware explanations that dynamically interact with users offers an opportunity to bridge this gap and support better decision-making.
This thesis adopts a Design Science Research (DSR) approach to develop and evaluate an interactive prototype for uncertainty-aware explanations in AI systems. The prototype will focus on how explanations can adapt to user preferences, cognitive load, and decision stakes. The study will involve iterative artifact development and evaluation with real or simulated users in a healthcare decision-making context.
Application
Please send your CV, transcript, and a short motivational letter detailing your interest in the topic and prior experience with design or human-computer interaction research to jaki@tu-… . A strong interest in design methodologies and user experience research is essential. Programming skills (e.g., Python), experience with design tools (e.g., Figma), and familiarity with human-AI interaction are advantageous.
Literature
- Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2018). ‘It's reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-14.
- Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-292.
- Gregor, S., & Hevner, A. R. (2013). Positioning and presenting design science research for maximum impact. MIS Quarterly, 37(2), 337-355.
- Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(4), e1312.
- Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.
- Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: Informing design practices for explainable AI user experiences. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1-15.