News
31 Jul 2025
Human-AI collaboration requires expertise
How do people work effectively with artificial intelligence, and what role does their own expertise play? Dr Fritz Becker investigated this question in his dissertation, which he successfully completed with his defence at the end of July.
At the centre of the work is a process model describing how human observers assess the competence of an AI system: they do so by attempting to mentally simulate its decisions. Three empirical studies investigate (1) the conditions under which people can predict the behaviour of an agent, (2) how accurately people with different levels of experience assess the agent's expertise, and (3) how trust in the agents changes over time, shaped by expectations and direct experience.

The results clearly show that observers can realistically assess the AI's decisions only if they possess at least the same task-relevant knowledge as the AI. Where this expertise is lacking, misjudgements about the AI's performance and trustworthiness frequently occur. The dynamics of trust are also noteworthy: although trust is initially based on the reputation of the AI agent, it quickly adapts to the performance actually observed.
Dr Fritz Becker's work makes an important contribution to understanding the cognitive processes underlying human-AI interaction and offers practical guidance for the design of adaptive and transparent AI systems, especially in complex fields of application such as education and vocational training, where sound judgements about trust and competence are crucial.