- Joe Sventek
While much attention has recently been given to the social, ethical, and political implications of fairness in artificial intelligence methods, practices, and technologies, far less has been said about the formal (conceptual, mathematical, algorithmic) definitions of fairness and their conceptual adequacy. It is not clear, for example, whether group parity/equity captures equality amongst individuals, or which of the two is more desirable in a given algorithmic function. Nor is it clear, as in the case of reinforcement learning, whether inequity aversion as a coded function should be a value function of an artificial environment or a policy of an artificial agent. If it is a policy, then the agent is not really “learning” fairness but merely obeying a constraint. If it is a value function, then the agent may not even know what fairness is. There are also conceptual challenges facing the distinction between the fair distribution of finite goods, indivisible goods, and infinitely divisible goods. Data and the products of data science (clean data sets, predictive algorithms, analytical methods, open software, etc.) may be in the latter category. If so, what it means to access or distribute data and data science products fairly could be a very different problem from the ones we are used to when dealing with the fair distribution of finite goods like food. And finally, it is not clear that mathematical “equality for all” as a concept is compatible with a fair distribution that follows the principle of “give to each what they deserve”. These are important questions concerning the concept of fairness in data science, and it is important that we examine the formal elements of our implementations of the concept of fairness in order to elucidate their adequacy, their desirability, and whether or not a given method, practice, or technology in artificial intelligence is, or can be made to be, fair.
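The tension between group parity and equality amongst individuals mentioned above can be made concrete with a minimal sketch. The toy data and the 0.05 similarity threshold below are hypothetical, invented purely for illustration: a decision rule satisfies demographic (group) parity exactly, yet treats nearly identical individuals from different groups differently, in the sense of Dwork et al.'s individual-fairness criterion.

```python
# Hypothetical toy data: each applicant is (group, qualification score, decision).
# Group A's high scorer is accepted; group B's decisions are flipped.
applicants = [
    ("A", 0.9, 1), ("A", 0.2, 0),
    ("B", 0.9, 0), ("B", 0.2, 1),
]

def positive_rate(group):
    """Fraction of a group's applicants who receive a positive decision."""
    decisions = [d for g, _, d in applicants if g == group]
    return sum(decisions) / len(decisions)

# Demographic (group) parity: equal acceptance rates across groups.
parity_gap = abs(positive_rate("A") - positive_rate("B"))

# Individual fairness: similar individuals should be treated similarly.
# Here "similar" means scores within 0.05 of each other (an assumed metric).
similar_pairs_treated_equally = all(
    d1 == d2
    for g1, s1, d1 in applicants
    for g2, s2, d2 in applicants
    if g1 != g2 and abs(s1 - s2) < 0.05
)

print(parity_gap)                     # 0.0   -> group parity holds exactly
print(similar_pairs_treated_equally)  # False -> individual fairness fails
```

Both groups have a 50% acceptance rate, so group parity is satisfied, while the two 0.9-scorers receive opposite decisions, so individual fairness fails; which of the two formalizations is the more adequate rendering of "fairness" is precisely the kind of question the talk raises.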
In this talk, based on a working paper, I will offer a taxonomy of formal conceptual challenges in both philosophy and computation whose consideration should accompany our broader socio-political concerns associated with fairness in data science technologies.
Ramón is an Assistant Professor of Philosophy and Data Ethics at the University of Oregon. He researches the epistemic and ethical implications of the use of computational methods and technologies in science and society. He has published papers on the epistemic implications of big data in science as well as on the challenges of justifying our ubiquitous reliance on computer simulations for scientific inquiry. He has also written about the challenges that opaque computational methods such as artificial intelligence, machine learning and big data pose to democratic processes.