Uncertainty in AI Decision-Making – Simulation-Based Study
Bachelor's or Master's thesis
Overview
Simulation-based research provides an effective method for exploring human-AI interaction in controlled environments. When uncertainty-aware explanations are introduced into AI systems, their influence on trust, reliance, and decision quality can vary based on task complexity and user characteristics. Simulations offer a way to systematically test these relationships and generate insights for real-world applications.
This thesis will focus on developing a simulation framework to study the impact of uncertainty-aware explanations on user behavior in AI-augmented decision-making. The simulation will include scenarios with varying levels of uncertainty and decision stakes, allowing for empirical analysis of how these factors influence user outcomes.
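To make the intended setup concrete, the following is a minimal sketch of such a simulation, not the thesis framework itself. All names, the reliance threshold, and the assumed unaided user accuracy are illustrative assumptions: a simulated user sees an AI recommendation together with a confidence value and either relies on the AI or decides alone, and decision accuracy is compared across uncertainty levels with and without the confidence display.

```python
import random

def simulate_trial(ai_accuracy: float, show_confidence: bool,
                   reliance_threshold: float = 0.7,
                   rng: random.Random = random) -> bool:
    """Return True if the simulated user's final decision is correct."""
    ai_correct = rng.random() < ai_accuracy
    # Assumed confidence model: the AI reports a noisy estimate of its
    # own accuracy, clipped to [0, 1].
    confidence = min(1.0, max(0.0, ai_accuracy + rng.uniform(-0.1, 0.1)))
    if show_confidence and confidence < reliance_threshold:
        # User distrusts low-confidence advice and decides alone
        # (assumed unaided accuracy of 0.6).
        return rng.random() < 0.6
    # Otherwise the user relies on the AI recommendation.
    return ai_correct

def run_condition(ai_accuracy: float, show_confidence: bool,
                  n_trials: int = 10_000, seed: int = 0) -> float:
    """Estimate decision accuracy for one experimental condition."""
    rng = random.Random(seed)
    correct = sum(simulate_trial(ai_accuracy, show_confidence, rng=rng)
                  for _ in range(n_trials))
    return correct / n_trials

if __name__ == "__main__":
    # Compare conditions across three uncertainty levels.
    for acc in (0.5, 0.7, 0.9):
        print(f"AI accuracy {acc}: "
              f"with confidence {run_condition(acc, True):.3f}, "
              f"without {run_condition(acc, False):.3f}")
```

The thesis would replace these toy behavioral assumptions with models grounded in the literature below (e.g., trust calibration in Lee & See, 2004) and extend the design with varying decision stakes and user characteristics.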
Application
Please send your CV, transcript, and a brief motivational statement to jaki@tu-…. A strong interest in programming (e.g., Python, PyTorch, or simulation tools), experimental design, and data analysis is essential.
Literature
- Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1), 32-64.
- Shneiderman, B. (2020). Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI. ACM Transactions on Interactive Intelligent Systems (TiiS), 10(4), 1-31.
- Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press.
- Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80.
- Kocielnik, R., Amershi, S., & Bennett, P. N. (2019). Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems.
- Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.