People are not coins: morally distinct types of predictions necessitate different fairness constraints
Keywords
Fairness metrics
Discrimination
Decision-making
Artificial intelligence
Fair prediction
Moral principle
006: Special computer methods
170: Ethics
Online Access
https://doi.org/10.1145/3531146.3534643
https://hdl.handle.net/11475/29380
https://digitalcollection.zhaw.ch/handle/11475/29380
Abstract
In a recent paper [1], Brian Hedden has argued that most of the group fairness constraints discussed in the machine learning literature are not necessary conditions for the fairness of predictions, and hence that there are no genuine fairness metrics. He proves this by discussing a special case of a fair prediction. In our paper, we show that Hedden's argument does not hold for the most common kind of predictions used in data science, which are about people and based on data from similar people; we call these "human-group-based practices." We argue that there is a morally salient distinction between human-group-based practices and those that are based on data of only one person, which we call "human-individual-based practices." Thus, what may be a necessary condition for the fairness of human-group-based practices may not be a necessary condition for the fairness of human-individual-based practices, on which Hedden's argument is based. Accordingly, the group fairness metrics discussed in the machine learning literature may still be relevant for most applications of prediction-based decision making.
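As context for the group fairness constraints the abstract refers to, the following is a minimal sketch (ours, not the paper's) of one widely discussed metric, statistical parity, which requires that the rate of positive predictions be equal across groups. The function name and the toy data are illustrative assumptions.

    # Minimal sketch of statistical parity, one of the group fairness
    # constraints discussed in the machine learning literature. The
    # function name and toy data are illustrative, not from the paper.
    import numpy as np

    def statistical_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Difference in positive-prediction rates between two groups (0 = parity)."""
        rate_g0 = y_pred[group == 0].mean()  # positive rate in group 0
        rate_g1 = y_pred[group == 1].mean()  # positive rate in group 1
        return float(rate_g1 - rate_g0)

    # Toy example: eight individuals, two groups of four.
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # binary predictions
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # binary group membership
    print(statistical_parity_difference(y_pred, group))
    # -0.5: group 1 receives positive predictions at a rate 50
    # percentage points lower than group 0, so parity is violated.

The paper's thesis is that whether a constraint like this one is a necessary condition for fairness depends on whether the practice is human-group-based or human-individual-based.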
Date
2023-12-15
Type
Conference: Paper
Identifier
oai:digitalcollection.zhaw.ch:11475/29380
https://doi.org/10.1145/3531146.3534643
info:doi/10.1145/3531146.3534643
https://hdl.handle.net/11475/29380
https://digitalcollection.zhaw.ch/handle/11475/29380
info:hdl/11475/29380
urn:isbn:9781450393522