Measuring and Mitigating Bias for Tabular Datasets with Multiple Protected Attributes
author/s: | Manh Khoi Duong, Stefan Conrad |
type: | Inproceedings |
booktitle: | Proceedings of the 2nd Workshop on Fairness and Bias in AI (AEQUITAS 2024), co-located with the 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain |
publisher: | CEUR Workshop Proceedings |
month: | August |
year: | 2024 |
location: | Santiago de Compostela, Spain |
Motivated by recital (67) of the current corrigendum of the European Union's AI Act, we propose and present measures and mitigation strategies for discrimination in tabular datasets. We specifically focus on datasets that contain multiple protected attributes, such as nationality, age, and sex. This makes measuring and mitigating bias more challenging, as many existing methods are designed for a single protected attribute. This paper makes a twofold contribution: first, new discrimination measures are introduced. These measures are categorized in our framework alongside existing ones, guiding researchers and practitioners in choosing the right measure to assess the fairness of the underlying dataset. Second, a novel application of an existing bias mitigation method, FairDo, is presented. We show that this strategy can mitigate any type of discrimination, including intersectional discrimination, by transforming the dataset. Through experiments on real-world datasets (Adult, Bank, Compas), we demonstrate that de-biasing datasets with multiple protected attributes is achievable. Furthermore, none of the tested machine learning models lost significant performance when trained on the transformed fair datasets rather than the original datasets. Discrimination was reduced by up to 83% in our experiments. In most experiments, the disparity between protected groups was reduced by at least 7%, and by 27% on average. Overall, the findings show that the mitigation strategy used is effective, and this study contributes to the ongoing discussion on the implementation of the European Union's AI Act.
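To illustrate the kind of disparity the abstract refers to, the sketch below computes a simple parity-style measure over intersectional groups formed by multiple protected attributes. This is a minimal, hypothetical example: the dataset, attribute names, and the `intersectional_disparity` helper are illustrative assumptions, not the measures actually introduced in the paper.

```python
from itertools import product  # not strictly needed; groups are found from the data

# Hypothetical toy dataset: each record carries two protected attributes
# ("sex", "age_group") and a binary outcome (1 = favorable decision).
records = [
    {"sex": "F", "age_group": "young", "y": 1},
    {"sex": "F", "age_group": "young", "y": 0},
    {"sex": "F", "age_group": "old",   "y": 1},
    {"sex": "M", "age_group": "young", "y": 1},
    {"sex": "M", "age_group": "old",   "y": 1},
    {"sex": "M", "age_group": "old",   "y": 0},
]

def intersectional_disparity(records, attrs, outcome="y"):
    """Maximum difference in favorable-outcome rates across all non-empty
    intersectional groups (a simple statistical-parity-style measure)."""
    counts = {}  # group key -> (group size, number of favorable outcomes)
    for r in records:
        key = tuple(r[a] for a in attrs)
        n, pos = counts.get(key, (0, 0))
        counts[key] = (n + 1, pos + r[outcome])
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)

print(intersectional_disparity(records, ["sex", "age_group"]))
```

A mitigation method such as FairDo would then transform the dataset (e.g., by removing or reweighting samples) so that a measure like this one shrinks, while the data remains useful for model training.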