Can We Trust Fair-AI?
DOI:
https://doi.org/10.1609/aaai.v37i13.26798
Keywords:
Fair Machine Learning, Fairness Metrics, Yule's Effect
Abstract
There is a fast-growing literature on the fairness of AI models (fair-AI), with a continuous stream of new conceptual frameworks, methods, and tools. How much can we trust them? How much do they actually impact society? We take a critical look at fair-AI and survey the issues, simplifications, and mistakes that researchers and practitioners often underestimate, which in turn can undermine trust in fair-AI and limit its contribution to society. In particular, we discuss the hyper-focus on fairness metrics and on optimizing their average performance. We instantiate this observation by discussing the Yule's effect of fair-AI tools: being fair on average does not imply being fair in contexts that matter. We conclude that the use of fair-AI methods should be complemented with the design, development, and verification practices that are commonly summarized under the umbrella of trustworthy AI.
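The Yule's effect mentioned in the abstract is an aggregation phenomenon akin to the Yule-Simpson paradox: a fairness metric can be satisfied on the pooled data while being violated inside every subgroup. A minimal sketch with made-up numbers (the groups "A"/"B", contexts "C1"/"C2", and counts are illustrative assumptions, not data from the paper) shows demographic parity holding overall yet failing per context:

```python
# Hypothetical counts illustrating a Yule/Simpson-style effect on a
# fairness metric: demographic parity holds in the aggregate, yet the
# positive-decision rates diverge inside every context.
from fractions import Fraction

# (positives, total) per group and per context -- invented data
data = {
    "A": {"C1": (40, 80), "C2": (2, 20)},
    "B": {"C1": (16, 20), "C2": (26, 80)},
}

def overall_rate(group):
    # Pool positives and totals across all contexts for one group.
    pos = sum(p for p, _ in data[group].values())
    tot = sum(t for _, t in data[group].values())
    return Fraction(pos, tot)

# Aggregate demographic parity: identical overall positive rates.
assert overall_rate("A") == overall_rate("B") == Fraction(42, 100)

# Yet within each context the rates differ substantially.
for ctx in ("C1", "C2"):
    rate_a = Fraction(*data["A"][ctx])
    rate_b = Fraction(*data["B"][ctx])
    print(ctx, float(rate_a), float(rate_b))
# C1: 0.5 vs 0.8; C2: 0.1 vs 0.325 -- parity fails in both contexts.
```

Which weighting (aggregate vs per-context) "matters" is exactly the kind of design choice the paper argues cannot be settled by optimizing an average metric alone.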
Published
2024-07-15
How to Cite
Ruggieri, S., Alvarez, J. M., Pugnana, A., State, L., & Turini, F. (2024). Can We Trust Fair-AI?. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15421-15430. https://doi.org/10.1609/aaai.v37i13.26798
Issue
Section
Senior Member Presentation: Summary Papers