Publications

Fair class balancing: Enhancing model fairness without observing sensitive attributes

Abstract

Machine learning models are at the foundation of modern society. Accounts of unfair models penalizing subgroups of a population have been reported in domains including law enforcement and job screening. Unfairness can stem from biases in the training data, as well as from class imbalance, i.e., when a sensitive group's data is not sufficiently represented. Under such settings, balancing techniques are commonly used to achieve better prediction performance, but their effects on model fairness are largely unknown. In this paper, we first illustrate the extent to which common balancing techniques exacerbate unfairness in real-world data. Then, we propose a new method, called fair class balancing, that enhances model fairness without using any information about sensitive attributes. We show that our method can achieve accurate prediction performance while concurrently improving fairness.
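The abstract refers to common balancing techniques applied to imbalanced training data. As a minimal illustration of one such standard technique (random oversampling, not the paper's fair class balancing method), the sketch below duplicates minority-class rows until all classes match the majority count; the function name and toy data are assumptions for demonstration only:

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Randomly duplicate minority-class rows until all classes
    reach the majority-class count (standard random oversampling)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [], []
    for cls, count in zip(classes, counts):
        idx = np.flatnonzero(y == cls)
        # Sample (with replacement) enough extra rows to reach the target size.
        extra = rng.choice(idx, size=target - count, replace=True)
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep])
        y_parts.append(y[keep])
    return np.vstack(X_parts), np.concatenate(y_parts)

# Imbalanced toy data: 8 majority-class vs. 2 minority-class examples.
X = np.arange(20, dtype=float).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # both classes now have 8 examples
```

As the paper notes, naive balancing of this kind can improve aggregate accuracy yet exacerbate unfairness across subgroups, which motivates the proposed fair class balancing method.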

Date
October 19, 2020
Authors
Shen Yan, Hsien-te Kao, Emilio Ferrara
Book
Proceedings of the 29th ACM International Conference on Information & Knowledge Management
Pages
1715-1724