inFairness#
Intuitively, an individually fair Machine Learning (ML) model treats similar inputs similarly. Formally, the leading notion of individual fairness is metric fairness (Dwork et al., 2011); it requires that, for all inputs x₁, x₂:

d_y(h(x₁), h(x₂)) ≤ L · d_x(x₁, x₂)

Here, h is the ML model, d_x and d_y are distance metrics over the input and output spaces respectively, and L ≥ 0 is a Lipschitz constant: inputs that are close under d_x must map to outputs that are close under d_y.
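To make the inequality concrete, here is a minimal sketch in plain Python (independent of the inFairness API; the function and metric names are illustrative) that checks the metric-fairness condition for a single pair of inputs:

```python
def metric_fair_on_pair(h, x1, x2, d_x, d_y, L=1.0):
    """Check d_y(h(x1), h(x2)) <= L * d_x(x1, x2) for one input pair."""
    return d_y(h(x1), h(x2)) <= L * d_x(x1, x2)

# Illustrative scalar metrics and model (absolute-difference distances).
d = lambda a, b: abs(a - b)
h = lambda x: 0.5 * x  # 0.5-Lipschitz, so metric fair for any L >= 0.5

metric_fair_on_pair(h, 1.0, 3.0, d, d, L=1.0)  # True: 1.0 <= 1.0 * 2.0
```

In practice, d_x encodes a task-specific notion of which individuals count as "similar", which is why learning or specifying d_x is itself a core component of an individual fairness pipeline.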
inFairness is a PyTorch package that supports auditing, training, and post-processing ML models for individual fairness. At its core, the library implements the key components of an individual fairness pipeline: distance metrics, fair training algorithms, auditors, and post-processing methods.
For an in-depth tutorial on Individual Fairness and the inFairness package, please watch this tutorial. Also, take a look at the examples folder for illustrative use cases.
Installation#
inFairness can be installed using pip:
pip install inFairness
Alternatively, if you wish to install the latest development version, clone the GitHub repository and install from source:
git clone https://github.com/IBM/inFairness
cd inFairness
pip install -e .
Features#
inFairness currently supports:
Training individually fair models: [Docs]
Auditing pre-trained ML models for individual fairness: [Docs]
Post-processing for individual fairness: [Docs]
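As a rough intuition for what an individual fairness audit measures, the sketch below (plain Python, not the inFairness API; all names are illustrative) estimates an empirical Lipschitz ratio over input pairs. Pairs of similar inputs that receive dissimilar outputs drive this ratio up, flagging potential individual-fairness violations:

```python
from itertools import combinations

def worst_case_ratio(h, xs, d_x, d_y, eps=1e-12):
    """Largest d_y(h(x), h(x')) / d_x(x, x') over all input pairs:
    an empirical estimate of the model's Lipschitz constant."""
    return max(
        d_y(h(a), h(b)) / (d_x(a, b) + eps)
        for a, b in combinations(xs, 2)
    )

d = lambda a, b: abs(a - b)   # illustrative metric
model = lambda x: 3.0 * x     # 3-Lipschitz linear model
ratio = worst_case_ratio(model, [0.0, 1.0, 2.0], d, d)
# ratio is approximately 3.0 for this model
```

Auditors in the library operate on the same principle at scale, searching for the input pairs a trained model treats most inconsistently under the chosen fair metric.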
The package implements the following components:
Algorithms#
Auditors#
Metrics#
Post-Processing algorithms#
API Documentation#
Package Reference
- API Documentation
- Algorithms
- Auditors
- Distances
- Post-Processing
- Utilities
- Development
- Changelog
- GitHub Repository