inFairness#



Intuitively, an individually fair Machine Learning (ML) model treats similar inputs similarly. Formally, the leading notion of individual fairness is metric fairness (Dwork et al., 2011); it requires:

\[ d_y (h(x_1), h(x_2)) \leq L d_x(x_1, x_2) \quad \forall \quad x_1, x_2 \in X \]

Here, \(h: X \rightarrow Y\) is an ML model, where \(X\) and \(Y\) are the input and output spaces; \(d_x\) and \(d_y\) are metrics on the input and output spaces; and \(L \geq 0\) is a Lipschitz constant. This constraint states that the distance between the model's predictions for inputs \(x_1\) and \(x_2\) is upper-bounded by (a multiple of) the fair distance between \(x_1\) and \(x_2\). The fair metric \(d_x\) encodes our intuition of which samples should be treated similarly by the ML model; by designing \(d_x\) appropriately, we ensure that for input samples the fair metric considers similar, the model outputs will be similar as well.
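To make the inequality concrete, here is a minimal numeric sketch (not using the inFairness library itself): for a linear model \(h(x) = Wx\) with Euclidean \(d_x\) and \(d_y\), the tightest Lipschitz constant is the spectral norm of \(W\), and the metric-fairness bound holds for every pair of inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a linear map h(x) = W @ x. Under Euclidean metrics,
# the spectral norm of W is a valid Lipschitz constant L.
W = rng.normal(size=(2, 5))
h = lambda x: W @ x

d_x = lambda a, b: np.linalg.norm(a - b)   # fair metric on inputs (assumed Euclidean)
d_y = lambda a, b: np.linalg.norm(a - b)   # metric on outputs (assumed Euclidean)

L = np.linalg.norm(W, 2)                   # spectral norm upper-bounds d_y / d_x

# Empirically verify d_y(h(x1), h(x2)) <= L * d_x(x1, x2) on random pairs
for _ in range(1000):
    x1, x2 = rng.normal(size=5), rng.normal(size=5)
    assert d_y(h(x1), h(x2)) <= L * d_x(x1, x2) + 1e-9
print("metric fairness holds with L =", round(L, 3))
```

In practice the fair metric \(d_x\) is not Euclidean but learned (see the Metrics section below), and \(h\) is a neural network, so \(L\) cannot be read off in closed form; the bound is instead enforced approximately during training or checked by an auditor.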

inFairness is a PyTorch package that supports auditing, training, and post-processing ML models for individual fairness. At its core, the library implements the key components of the individual fairness pipeline: \(d_x\), a distance on the input space; \(d_y\), a distance on the output space; and the learning algorithms that optimize for the constraint above.

For an in-depth tutorial on individual fairness and the inFairness package, please watch this tutorial. Also, take a look at the examples folder for illustrative use cases.


Installation#

inFairness can be installed using pip:

pip install inFairness

Alternatively, to install the latest development version, clone the repository from GitHub and install it in editable mode:

git clone https://github.com/IBM/inFairness
cd inFairness
pip install -e .

Features#

inFairness currently supports:

  1. Training individually fair models: [Docs]

  2. Auditing pre-trained ML models for individual fairness: [Docs]

  3. Post-processing for individual fairness: [Docs]

The package implements the following components:

Algorithms#

  1. Sensitive Set Invariance (SenSeI): [Paper], [Docs]

  2. Sensitive Subspace Robustness (SenSR): [Paper], [Docs]
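The shared idea behind both algorithms is to augment the task loss with a penalty on how much predictions change under fair-metric-small perturbations of the input. The following is an illustrative sketch of such an objective (assumed names and a fixed sensitive direction; it is not the library's implementation, which finds worst-case perturbations adversarially):

```python
import numpy as np

# Sketch of a SenSeI/SenSR-style penalized objective: task loss plus a
# penalty on prediction changes under a sensitive perturbation of inputs.
rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(8, 3))                  # toy features
y = (rng.random(8) > 0.5).astype(float)      # toy binary labels
v = np.array([1.0, 0.0, 0.0])                # assumed sensitive direction
rho = 5.0                                    # fairness penalty weight (assumed)

def objective(w):
    p = sigmoid(X @ w)
    task = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # log loss
    p_pert = sigmoid((X + v) @ w)            # predictions after perturbation
    fair = np.mean((p - p_pert) ** 2)        # d_y between original and perturbed
    return task + rho * fair

w = rng.normal(size=3)
print(objective(w))
# A model that ignores the sensitive feature (w[0] = 0) incurs zero penalty:
print(objective(w * np.array([0.0, 1.0, 1.0])))
```

Minimizing such an objective pushes the model toward treating inputs that differ only along sensitive directions identically, which is exactly the metric-fairness constraint in soft form.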

Auditors#

  1. Sensitive Set Invariance (SenSeI) Auditor: [Paper], [Docs]

  2. Sensitive Subspace Robustness (SenSR) Auditor: [Paper], [Docs]
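An auditor asks the converse question: given a trained model, how much can its output change among inputs that the fair metric deems close? A minimal sketch of this idea (random search in place of the library's gradient-based auditors, which it is not) looks like:

```python
import numpy as np

# Sketch of an individual-fairness audit: search near x, within a small
# fair-metric radius, for inputs whose outputs differ, and report the
# largest observed ratio d_y / d_x. Large ratios flag likely unfairness.
rng = np.random.default_rng(0)
W = rng.normal(size=(1, 4))
h = lambda x: W @ x                          # toy pre-trained model

def audit(x, n_trials=500, radius=0.1):
    worst = 0.0
    for _ in range(n_trials):
        delta = rng.normal(size=4)
        delta *= radius / np.linalg.norm(delta)   # stay on the fair radius
        ratio = np.linalg.norm(h(x + delta) - h(x)) / radius
        worst = max(worst, ratio)
    return worst

x = rng.normal(size=4)
print(audit(x))   # for this linear model, bounded by the spectral norm of W
```

The library's auditors replace the random search with an adversarial optimization and report a statistic over a test set rather than a single point.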

Metrics#

  1. Embedded Xenial Pair Logistic Regression Metric (EXPLORE): [Paper], [Docs]

  2. SVD Sensitive Subspace Metric: [Paper], [Docs]
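Both metrics learn a sensitive subspace of the input space and discount distances along it, so that inputs differing only in sensitive attributes are considered similar. A bare-bones sketch of that geometry (with an assumed, hand-picked sensitive direction rather than one learned from data):

```python
import numpy as np

# Sketch of a sensitive-subspace fair distance (illustrative, not the
# library's implementation): project out the sensitive directions before
# measuring distance, so sensitive-only differences count as zero.
A = np.array([[1.0, 0.0, 0.0]]).T          # assumed sensitive direction (column)
P = A @ np.linalg.pinv(A)                  # orthogonal projector onto the subspace

def fair_distance(x1, x2):
    d = x1 - x2
    return np.linalg.norm(d - P @ d)       # keep only the "fair" component

x1 = np.array([0.0, 1.0, 2.0])
x2 = np.array([5.0, 1.0, 2.0])             # differs only in the sensitive feature
print(fair_distance(x1, x2))               # → 0.0: treated as identical
```

EXPLORE learns the sensitive directions from labeled comparable pairs via logistic regression, while the SVD variant extracts them from the data's principal sensitive components; the resulting distance has the same project-and-measure structure.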

Post-Processing algorithms#

  1. Graph Laplacian Individual Fairness (GLIF): [Paper], [Docs]
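Post-processing leaves the trained model untouched and instead smooths its outputs over a similarity graph built from the fair metric. A minimal Laplacian-smoothing sketch in that spirit (inspired by GLIF, with assumed weights; not the library's implementation):

```python
import numpy as np

# Smooth raw model scores so that inputs deemed similar by the fair
# metric receive similar outputs, by solving (I + lam * L) y_fair = y.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])        # fair-similarity graph: points 0, 1 similar
L = np.diag(W.sum(axis=1)) - W         # graph Laplacian
y = np.array([0.9, 0.1, 0.5])          # raw model scores
lam = 10.0                             # smoothing strength (assumed)

y_fair = np.linalg.solve(np.eye(3) + lam * L, y)
print(y_fair)   # scores of the similar pair are pulled together; point 2 unchanged
```

Larger `lam` pulls similar points' scores closer at the cost of fidelity to the original predictions; the isolated third point keeps its score exactly.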


API Documentation#

Package Reference