The proliferation of machine learning (ML) models in high-risk social applications has raised concerns regarding fairness and transparency. Instances of biased decision-making have led to growing distrust among consumers who are subject to ML-based decisions.
To address this challenge and rebuild consumer trust, technology that enables public verification of the fairness properties of these models is urgently needed. However, legal and privacy restrictions often prevent organizations from disclosing their models, making external verification difficult and leaving room for unfair practices such as model swapping, where different customers are quietly served different models.
In response to these challenges, researchers from Stanford and UCSD proposed a system called FairProof. It consists of a fairness certification algorithm and a cryptographic protocol. The algorithm evaluates the fairness of the model on a specific data point using a metric known as local Individual Fairness (IF).
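Concretely, local IF asks that the model give the same prediction to a data point and to its counterparts that differ only in sensitive attributes. A minimal sketch of such a point-wise check is below; the function names, toy classifier, and choice of sensitive feature are illustrative, not FairProof's actual interface:

```python
import numpy as np

def is_locally_fair(predict, x, sensitive_idx, sensitive_values):
    """Brute-force local IF check: the prediction for x must not change
    when only the sensitive attribute is altered."""
    base = predict(x)
    for value in sensitive_values:
        counterpart = x.copy()
        counterpart[sensitive_idx] = value  # same individual, different sensitive attribute
        if predict(counterpart) != base:
            return False  # a sensitive-attribute flip changed the outcome
    return True

# Toy linear classifier; weights are illustrative.
w, b = np.array([0.8, -0.1, 0.3]), -0.2
predict = lambda point: int(w @ point + b > 0)
x = np.array([1.0, 0.0, 0.5])  # feature 1 plays the role of the sensitive attribute
print(is_locally_fair(predict, x, sensitive_idx=1, sensitive_values=[0.0, 1.0]))  # True
```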
This approach allows personalized certificates to be issued to individual customers, making it well suited to customer-facing organizations. Importantly, the algorithm is agnostic to the training process, so it applies across a wide range of models and datasets.
Local IF certification is achieved by leveraging techniques from the robustness literature, while ensuring compatibility with zero-knowledge proofs (ZKPs) to maintain model confidentiality. ZKPs allow verification of claims about private data, such as fairness certificates, without revealing the weights of the underlying model.
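The link to robustness certification is easiest to see for a linear classifier: if the distance from a point to the decision boundary (its margin) exceeds its distance to every counterpart that differs only in sensitive attributes, none of those counterparts can receive a different label. The sketch below uses that linear simplification; FairProof itself certifies neural networks, which requires more machinery:

```python
import numpy as np

def certify_local_if_linear(w, b, x, sensitive_idx, sensitive_values):
    """Certify local IF for the classifier sign(w @ x + b): if the margin of x
    exceeds the distance to the farthest sensitive counterpart, every
    counterpart provably lies on the same side of the decision boundary."""
    margin = abs(w @ x + b) / np.linalg.norm(w)  # distance from x to the boundary
    max_shift = max(abs(v - x[sensitive_idx]) for v in sensitive_values)
    return margin > max_shift

w, b = np.array([0.8, -0.1, 0.3]), -0.2
x = np.array([2.0, 0.0, 0.5])  # feature 1 is the sensitive attribute
print(certify_local_if_linear(w, b, x, sensitive_idx=1, sensitive_values=[0.0, 1.0]))  # True
```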
To make the process computationally efficient, a specialized ZKP protocol is implemented that strategically reduces overhead by moving expensive computation to an offline phase and optimizing heavily used sub-functionalities.
Furthermore, cryptographic commitments ensure model uniformity, that is, that the same model is used for every customer: organizations publicly commit to their model weights while keeping the weights themselves confidential. This technique, widely studied in the ML security literature, provides transparency and accountability while safeguarding sensitive model information.
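FairProof relies on commitment schemes that compose with ZKP circuits; as a simplified stand-in (not the paper's actual scheme), the hash-based sketch below shows the basic idea: the published digest binds the organization to one set of weights, while the secret salt keeps those weights hidden unless the commitment is opened.

```python
import hashlib, os
import numpy as np

def commit(weights, salt=None):
    """Commit to model weights: publish the digest, keep weights and salt private."""
    salt = salt if salt is not None else os.urandom(32)
    digest = hashlib.sha256(salt + weights.tobytes()).hexdigest()
    return digest, salt

def verify(digest, weights, salt):
    """Check an opened commitment against the published digest."""
    return hashlib.sha256(salt + weights.tobytes()).hexdigest() == digest

weights = np.array([0.8, -0.1, 0.3])
digest, salt = commit(weights)               # published once, e.g. at deployment
print(verify(digest, weights, salt))         # True: the committed model is in use
print(verify(digest, weights + 1e-6, salt))  # False: even a tiny swap is detectable
```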
By combining fairness certification with cryptographic protocols, FairProof offers a comprehensive solution to fairness and transparency concerns in ML-based decision-making, fostering greater trust among consumers and stakeholders alike.
Check out the Paper. All credit for this research goes to the researchers of this project.
Arshad is an intern at MarktechPost. He is currently pursuing his Master's degree in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn advance technology. He is passionate about understanding nature fundamentally with the help of tools such as mathematical models, machine learning models, and artificial intelligence.