Auditors#

SenSR Auditor#

class inFairness.auditor.SenSRAuditor(loss_fn, distance_x, num_steps, lr, max_noise=0.1, min_noise=-0.1)[source]#

SenSR Auditor implements the functionality to generate worst-case examples by solving the following optimization equation:

\[x_{t_b}^* \gets \arg\max_{x \in X} l((x,y_{t_b}),h) - \lambda d_x^2(x_{t_b},x)\]

Proposed in Training individually fair ML models with sensitive subspace robustness

Parameters:
  • loss_fn (torch.nn.Module) – Loss function

  • distance_x (inFairness.distances.Distance) – Distance metric in the input space

  • num_steps (int) – Number of update steps the auditor should perform to find worst-case examples

  • lr (float) – Learning rate

audit(network, X_audit, Y_audit, audit_threshold=None, lambda_param=None, confidence=0.95, optimizer=None)[source]#

Audit a model for individual fairness

Parameters:
  • network (torch.nn.Module) – PyTorch network model

  • X_audit (torch.Tensor) – Auditing data samples. Shape: (B, *)

  • Y_audit (torch.Tensor) – Auditing data labels. Shape: (B)

  • audit_threshold (float, optional) – Auditing threshold used to decide whether a model is individually fair. If audit_threshold is specified, the audit procedure determines whether the model is individually fair. If it is not specified, the audit procedure simply returns the mean and lower bound of the loss ratio, leaving the determination of the model’s fairness to the user. Default: None

  • lambda_param (float) – Lambda weighting parameter as defined in the equation above

  • confidence (float, optional) – Confidence value. Default = 0.95

  • optimizer (torch.optim.Optimizer, optional) – PyTorch Optimizer object. Default: torch.optim.SGD

Returns:

audit_response – Audit response containing test statistics

Return type:

inFairness.auditor.datainterface.AuditorResponse
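
For illustration, a minimal usage sketch of the audit workflow is shown below. The stand-in model, the random data, the loss choice, the threshold value, and the particular input-space metric (SVDSensitiveSubspaceDistance and its fitting call) are assumptions made for this example; substitute your own trained network and fitted inFairness.distances metric.

import torch
from inFairness.auditor import SenSRAuditor
from inFairness import distances

# Illustrative stand-ins; in practice, use your trained model and real data.
model = torch.nn.Linear(50, 2)                       # stand-in for a trained classifier
X_train = torch.randn(1000, 50)                      # features used to fit the fair metric
X_test, y_test = torch.randn(200, 50), torch.randint(0, 2, (200,))

# Assumed metric choice: a sensitive-subspace distance fitted on training features.
distance_x = distances.SVDSensitiveSubspaceDistance()
distance_x.fit(X_train, n_components=10)

auditor = SenSRAuditor(
    loss_fn=torch.nn.CrossEntropyLoss(),             # illustrative loss choice
    distance_x=distance_x,
    num_steps=100,
    lr=0.01,
)

response = auditor.audit(
    model, X_test, y_test,
    audit_threshold=1.25,                            # hypothetical threshold; omit to get statistics only
    lambda_param=0.5,
    confidence=0.95,
)
print(response)                                      # AuditorResponse with the audit test statistics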

generate_worst_case_examples(network, x, y, lambda_param, optimizer=None)[source]#

Generate worst-case examples given the input data sample batch x

Parameters:
  • network (torch.nn.Module) – PyTorch network model

  • x (torch.Tensor) – Batch of input datapoints

  • y (torch.Tensor) – Batch of output datapoints

  • lambda_param (float) – Lambda weighting parameter as defined in the equation above

  • optimizer (torch.optim.Optimizer, optional) – PyTorch Optimizer object

Returns:

X_worst – Worst case examples for the provided input datapoints

Return type:

torch.Tensor
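
Continuing the illustrative sketch above (reusing the assumed model, auditor, and audit tensors), the worst-case counterparts of a batch can also be obtained directly, for example to inspect them or to reuse them in a training loop:

# SenSR's auditor needs the labels y because it maximizes the loss
# (unlike SenSeI's auditor, which maximizes the output distance).
x_worst = auditor.generate_worst_case_examples(
    network=model, x=X_test, y=y_test, lambda_param=0.5
)
print(x_worst.shape)   # same shape as X_test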

SenSeI Auditor#

class inFairness.auditor.SenSeIAuditor(distance_x, distance_y, num_steps, lr, max_noise=0.1, min_noise=-0.1)[source]#

SenSeI Auditor implements the functionality to generate worst-case examples by solving the following optimization equation:

\[x_{t_b}' \gets \arg\max_{x' \in X}\{d_{Y}(h_{\theta_t}(x_{t_b}),h_{\theta_t}(x')) - \lambda_t d_{X}(x_{t_b},x')\}\]

Proposed in SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness

Parameters:
  • distance_x (inFairness.distances.Distance) – Distance metric in the input space

  • distance_y (inFairness.distances.Distance) – Distance metric in the output space

  • num_steps (int) – Number of update steps the auditor should perform to find worst-case examples

  • lr (float) – Learning rate

audit(network, X_audit, Y_audit, loss_fn, audit_threshold=None, lambda_param=None, confidence=0.95, optimizer=None)[source]#

Audit a model for individual fairness

Parameters:
  • network (torch.nn.Module) – PyTorch network model

  • X_audit (torch.Tensor) – Auditing data samples. Shape: (B, *)

  • Y_audit (torch.Tensor) – Auditing data labels. Shape: (B)

  • loss_fn (torch.nn.Module) – Loss function

  • audit_threshold (float, optional) – Auditing threshold used to decide whether a model is individually fair. If audit_threshold is specified, the audit procedure determines whether the model is individually fair. If it is not specified, the audit procedure simply returns the mean and lower bound of the loss ratio, leaving the determination of the model’s fairness to the user. Default: None

  • lambda_param (float) – Lambda weighting parameter as defined in the equation above

  • confidence (float, optional) – Confidence value. Default = 0.95

  • optimizer (torch.optim.Optimizer, optional) – PyTorch Optimizer object. Default: torch.optim.SGD

Returns:

audit_response – Audit response containing test statistics

Return type:

inFairness.auditor.datainterface.AuditorResponse

generate_worst_case_examples(network, x, lambda_param, optimizer=None)[source]#

Generate worst-case examples given the input data sample batch x

Parameters:
  • network (torch.nn.Module) – PyTorch network model

  • x (torch.Tensor) – Batch of input datapoints

  • lambda_param (float) – Lambda weighting parameter as defined in the equation above

  • optimizer (torch.optim.Optimizer, optional) – PyTorch Optimizer object. Default: torch.optim.Adam

Returns:

X_worst – Worst case examples for the provided input datapoints

Return type:

torch.Tensor
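
As a rough illustration of the interface, the sketch below builds a SenSeIAuditor and generates worst-case inputs for a batch. The stand-in network, the random data, and the particular distance classes and fitting calls (SVDSensitiveSubspaceDistance, SquaredEuclideanDistance) are assumptions chosen for this example; any fitted inFairness.distances metrics can be used.

import torch
from inFairness.auditor import SenSeIAuditor
from inFairness import distances

model = torch.nn.Linear(50, 2)                 # stand-in for the network being trained
X_train = torch.randn(1000, 50)
x_batch = torch.randn(64, 50)

distance_x = distances.SVDSensitiveSubspaceDistance()
distance_x.fit(X_train, n_components=10)       # fair metric on the inputs (assumed choice)

distance_y = distances.SquaredEuclideanDistance()
distance_y.fit(num_dims=2)                     # metric on the 2-dimensional model output (assumed choice)

auditor = SenSeIAuditor(
    distance_x=distance_x, distance_y=distance_y, num_steps=50, lr=0.01
)

# Inputs that maximize the output distance to the batch while staying close
# under the fair input metric (no labels are needed here).
x_worst = auditor.generate_worst_case_examples(
    network=model, x=x_batch, lambda_param=0.5
)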

SenSTIR Auditor#

class inFairness.auditor.SenSTIRAuditor(distance_x: MahalanobisDistances, distance_y: MahalanobisDistances, num_steps: int, lr: float, max_noise: float = 0.1, min_noise: float = -0.1)[source]#

SenSTIR Auditor generates worst-case examples by solving the following optimization problem:

\[q' \gets \arg\max_{q'}\{\|h_{\theta_t}(q) - h_{\theta_t}(q')\|_{2}^{2} - \lambda_t d_{Q}(q,q')\}\]

At a high level, it finds \(q'\) that maximizes the score difference \(\|h_{\theta_t}(q) - h_{\theta_t}(q')\|_2^2\) while keeping the fair set distance \(d_Q(q, q')\) to the original query \(q\) small.

Proposed in Individually Fair Rankings

Parameters:
  • distance_x (inFairness.distances.Distance) – Distance metric in the input space. Should be an instance of MahalanobisDistances

  • distance_y (inFairness.distances.Distance) – Distance metric in the output space. Should be an instance of MahalanobisDistances

  • num_steps (int) – Number of optimization steps taken to produce the worst-case examples

  • lr (float) – Learning rate of the optimization

  • max_noise (float) – Upper end of the uniform distribution from which the initial noise added to q (to form q’) is drawn

  • min_noise (float) – Lower end of the uniform distribution from which the initial noise added to q (to form q’) is drawn

generate_worst_case_examples(network, Q, lambda_param, optimizer=None)[source]#

Generate worst-case examples given the input batch of queries Q (shape: batch_size, num_items, num_features)

Parameters:
  • network (torch.nn.Module) – PyTorch network model that outputs scores per item

  • Q (torch.Tensor) – Tensor of shape (batch_size, num_items, num_features) containing the batch of queries for ranking

  • lambda_param (float) – Lambda weighting parameter as defined in the equation above

  • optimizer (torch.optim.Optimizer, optional) – PyTorch Optimizer object

Returns:

q_worst – Worst-case queries for the provided input queries Q

Return type:

torch.Tensor
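
A minimal, illustrative sketch of the query-level interface follows. The stand-in scoring model, the random query tensor, and the particular Mahalanobis-type metrics and how they are fitted are assumptions made for this example, not requirements of the auditor.

import torch
from inFairness.auditor import SenSTIRAuditor
from inFairness import distances

num_items, num_features = 20, 30

class Scorer(torch.nn.Module):
    # Stand-in model producing one score per item:
    # (batch_size, num_items, num_features) -> (batch_size, num_items)
    def __init__(self, num_features):
        super().__init__()
        self.linear = torch.nn.Linear(num_features, 1)

    def forward(self, q):
        return self.linear(q).squeeze(-1)

scoring_model = Scorer(num_features)
Q = torch.randn(8, num_items, num_features)         # (batch_size, num_items, num_features)

distance_x = distances.SVDSensitiveSubspaceDistance()
distance_x.fit(Q.reshape(-1, num_features), n_components=10)   # fair metric over item features (assumed choice)

distance_y = distances.SquaredEuclideanDistance()
distance_y.fit(num_dims=1)                           # each item's score is a scalar (assumed choice)

auditor = SenSTIRAuditor(
    distance_x=distance_x, distance_y=distance_y, num_steps=50, lr=0.01
)

# Worst-case queries: maximize the score difference to Q while staying close
# to Q under the fair set distance d_Q.
Q_worst = auditor.generate_worst_case_examples(
    network=scoring_model, Q=Q, lambda_param=0.5
)
print(Q_worst.shape)   # same shape as Q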