DOI: 10.1145/3287560.3287588

Fairness-Aware Programming

Published: 29 January 2019

Abstract

Increasingly, programming tasks involve automating and deploying sensitive decision-making processes that may have adverse impacts on individuals or groups of people. The issue of fairness in automated decision-making has thus become a major problem, attracting interdisciplinary attention. In this work, we aim to make fairness a first-class concern in programming. Specifically, we propose fairness-aware programming, where programmers can state fairness expectations natively in their code, and have a runtime system monitor decision-making and report violations of fairness.
We present a rich and general specification language that allows a programmer to specify a range of fairness definitions from the literature, as well as others. As the decision-making program executes, the runtime maintains statistics on the decisions made and incrementally checks whether the fairness definitions have been violated, reporting such violations to the developer. The advantages of this approach are twofold: (i) Enabling declarative mathematical specifications of fairness in the programming language simplifies the process of checking fairness, as the programmer does not have to write ad hoc code for maintaining statistics. (ii) Compared to existing techniques for checking and ensuring fairness, our approach monitors a decision-making program in the wild, which may be running on a distribution that is unlike the dataset on which a classifier was trained and tested.
We describe an implementation of our proposed methodology as a library in the Python programming language and illustrate its use on case studies from the algorithmic fairness literature.



Information

Published In

FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency
January 2019
388 pages
ISBN:9781450361255
DOI:10.1145/3287560
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. Assertion languages
  2. Fairness
  3. Probabilistic specifications
  4. Runtime monitoring
  5. Runtime verification

Qualifiers

  • Research-article
  • Research
  • Refereed limited


Conference

FAT* '19



Cited By

  • (2024) Program Analysis for Adaptive Data Analysis. Proceedings of the ACM on Programming Languages 8 (PLDI), 914–938. https://doi.org/10.1145/3656414
  • (2024) Towards Runtime Monitoring for Responsible Machine Learning using Model-driven Engineering. Proceedings of the ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems, 195–202. https://doi.org/10.1145/3640310.3674092
  • (2024) Ethical Concerns With Regards to Artificial Intelligence: A National Public Poll in Taiwan. IEEE Access 12, 133595–133605. https://doi.org/10.1109/ACCESS.2024.3458893
  • (2024) MBFair: A Model-Based Verification Methodology for Detecting Violations of Individual Fairness. Software and Systems Modeling. https://doi.org/10.1007/s10270-024-01184-y
  • (2023) Runtime Monitoring of Dynamic Fairness Properties. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 604–614. https://doi.org/10.1145/3593013.3594028
  • (2023) Online Fairness Auditing through Iterative Refinement. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1665–1676. https://doi.org/10.1145/3580305.3599454
  • (2023) Integrity 2023: Integrity in Social Networks and Media. Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 1269–1270. https://doi.org/10.1145/3539597.3572704
  • (2023) Runtime Monitoring of Human-Centric Requirements in Machine Learning Components: A Model-Driven Engineering Approach. 2023 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C), 146–152. https://doi.org/10.1109/MODELS-C59198.2023.00040
  • (2023) Monitoring Algorithmic Fairness Under Partial Observations. Runtime Verification, 291–311. https://doi.org/10.1007/978-3-031-44267-4_15
  • (2023) Monitoring Algorithmic Fairness. Computer Aided Verification, 358–382. https://doi.org/10.1007/978-3-031-37703-7_17
