Project

Egocentric Biases Meet Biased Algorithms

Workgroup: Everyday Media
Duration: 06/2023–06/2026
Funding: Deutsche Forschungsgemeinschaft (DFG) / German Research Foundation
Project description

Different individuals favor different notions of fairness. This PhD project therefore investigates how egocentric biases affect judgments of algorithmic fairness, an issue that often goes unnoticed in discussions of AI discrimination. By understanding how such biases shape fairness assessments, the project offers insights for improving AI-assisted decision-making across diverse domains.


Fairness is a multi-faceted construct with various definitions that are generally shaped by the domain in which they are investigated. Given that perceived fairness is a crucial objective in domains such as education, employment, or bank lending, it becomes especially important to examine it as these areas are transformed by AI-assisted decision-making systems. The opposite of fair decision-making can be understood as discriminatory decision-making, which typically conflicts directly with anti-discrimination laws in most countries. Even though companies and institutions try to prevent categorical group discrimination in the AI applications they deploy, cases in which such discrimination occurs are published from time to time. The issue of group discrimination by AI nevertheless seems to be either unknown or not very salient among participants in typical algorithm acceptance studies. Awareness of the matter is therefore crucial for effecting change. Yet even when awareness is created, some demographic groups might perceive the AI's decision-making as less discriminatory because they believe they are not negatively affected by it. This raises the question of the extent to which egocentric biases also influence the development and deployment of AI-assisted decision-making systems.


Therefore, in a series of experiments, our objective is to examine the frequently underestimated phenomenon of group discrimination by AI among the general population. We aim to show that self-centered biases, such as the tendency to favor outcomes beneficial to oneself, can distort people's perception of group discrimination. Additionally, we seek to understand how these biases shape preferences in the development of AI solutions designed to address group discrimination.


Consequently, this project seeks to comprehensively understand the cross-domain implications of fairness in AI, with the ultimate goal of promoting more ethical and equitable technological advancements.

Contact

Nico Ehrhardt
Tel.: +49 7071 979-312

Project team

Prof. Dr. Sonja Utz