
Towards Fairness and Privacy: A Novel Data Pre-processing Optimization Framework for Non-binary Protected Attributes

full text: PDF
author/s: Manh Khoi Duong, Stefan Conrad
editor: Diana Benavides Prado, Sarah Monazam Erfani, Philippe Fournier-Viger, Yee Ling Boo, Yun Sing Koh
booktitle: The 21st Australasian Data Science and Machine Learning Conference (AusDM’23)
publisher: Springer Nature
address: Auckland, New Zealand

Unfair outcomes of AI systems are often rooted in biased datasets. This work therefore presents a framework for addressing fairness by debiasing datasets containing a (non-)binary protected attribute. The framework states a combinatorial optimization problem: find a data subset that minimizes a chosen discrimination measure, where heuristics such as genetic algorithms can be used to solve for the stated fairness objectives. Depending on a user-defined setting, the framework enables different use cases, such as data removal, the addition of synthetic data, or the exclusive use of synthetic data. The exclusive use of synthetic data in particular enhances the framework’s ability to preserve privacy while optimizing for fairness. In a comprehensive evaluation, we demonstrate that under our framework, genetic algorithms can effectively yield fairer datasets compared to the original data. In contrast to prior work, the framework exhibits a high degree of flexibility: it is metric- and task-agnostic, can be applied to both binary and non-binary protected attributes, and demonstrates efficient runtime.
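As an illustrative sketch of the kind of optimization the abstract describes, the following toy example selects a data subset (via a binary inclusion mask) that minimizes a statistical-parity-style gap across a non-binary protected attribute, using a simple elitist genetic algorithm. All data, the concrete discrimination measure, and the GA parameters here are hypothetical choices for illustration, not the paper's actual implementation:

```python
import random

# Toy dataset (hypothetical): each record is (protected_group, positive_outcome).
# The protected attribute is non-binary here: three groups "a", "b", "c".
data = ([("a", 1)] * 30 + [("a", 0)] * 10 +
        [("b", 1)] * 10 + [("b", 0)] * 30 +
        [("c", 1)] * 20 + [("c", 0)] * 20)

def discrimination(records):
    """Max pairwise gap in positive-outcome rates across groups
    (one statistical-parity-style choice; the framework is metric-agnostic)."""
    rates = []
    for g in {g for g, _ in records}:
        outcomes = [y for gg, y in records if gg == g]
        rates.append(sum(outcomes) / len(outcomes))
    return max(rates) - min(rates)

def fitness(mask):
    """Lower is better: discrimination of the selected subset."""
    subset = [r for r, keep in zip(data, mask) if keep]
    # Guard against degenerate subsets (too small, or a group dropped entirely).
    if len(subset) < 10 or len({g for g, _ in subset}) < 3:
        return 1.0
    return discrimination(subset)

def genetic_search(pop_size=40, generations=60, seed=0):
    """Elitist genetic algorithm over binary inclusion masks."""
    rng = random.Random(seed)
    n = len(data)
    pop = [[rng.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]        # keep the fairer half
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)  # one-point crossover
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]
            flip = rng.randrange(n)            # point mutation
            child[flip] = not child[flip]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best_mask = genetic_search()
debiased = [r for r, keep in zip(data, best_mask) if keep]
```

In this sketch only the data-removal setting is exercised; the paper's other settings (adding synthetic records, or using synthetic data exclusively) would correspondingly change what the search mask ranges over.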

Heinrich Heine Universität

Datenbanken und Informationssysteme


Prof. Dr. Stefan Conrad

Universitätsstr. 1
40225 Düsseldorf
Building: 25.12
Floor/Room: 02.24
Phone: +49 211 81-14088


Lisa Lorenz

Universitätsstr. 1
40225 Düsseldorf
Building: 25.12
Floor/Room: 02.22
Phone: +49 211 81-11312
Responsible for the content: Datenbanken & Informationssysteme