Auditors#

Auditors included in the package

class inFairness.auditor.Auditor[source]#

Abstract class for model auditors, e.g., SenSeI or SenSR

audit(*args, **kwargs)[source]#

Audit model for individual fairness

compute_audit_result(loss_ratios, threshold=None, confidence=0.95)[source]#

Computes auditing statistics given loss ratios and user-specified acceptance threshold

Parameters:
  • loss_ratios (numpy.ndarray) – Array of per-sample ratios of worst-case loss to original loss

  • threshold (float, optional) – User-specified acceptance threshold value. If a value is not specified, the procedure simply returns the mean and lower bound of the loss ratio, leaving the determination of the model’s fairness to the user. If a value is specified, the procedure also determines whether the model is individually fair.

  • confidence (float, optional) – Confidence value. Default = 0.95

Returns:

audit_result – Data interface with auditing results and statistics

Return type:

AuditorResponse
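
Example — a minimal sketch of computing audit statistics from precomputed loss ratios. It assumes the base Auditor can be instantiated directly (the statistical helpers are implemented on the base class); if it cannot, call the method on any concrete auditor instance. The loss ratios and the threshold value 1.25 are illustrative placeholders only:

    import numpy as np
    from inFairness.auditor import Auditor

    auditor = Auditor()

    # Per-sample ratios of worst-case loss to original loss (random placeholders here)
    loss_ratios = np.random.uniform(low=0.9, high=1.5, size=100)

    # Without a threshold: returns the mean and lower bound of the loss ratio,
    # leaving the fairness determination to the user
    stats_only = auditor.compute_audit_result(loss_ratios)

    # With a threshold: also decides whether the model is individually fair
    # at the given confidence level
    result = auditor.compute_audit_result(loss_ratios, threshold=1.25, confidence=0.95)
    print(result)  # AuditorResponse data interface with auditing results and statistics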

compute_loss_ratio(X_audit, X_worst, Y_audit, network, loss_fn)[source]#

Compute the ratio of the loss on worst-case samples (obtained by solving the gradient flow attack) to the loss on the original audit samples

Parameters:
  • X_audit (torch.Tensor) – Auditing samples. Shape: (n_samples, n_features)

  • X_worst (torch.Tensor) – Worst-case samples obtained from the gradient flow attack. Shape: (n_samples, n_features)

  • Y_audit (torch.Tensor) – Labels of auditing samples. Shape: (n_samples)

  • network – Model being audited

  • loss_fn – Loss function used to compute sample losses

Returns:

loss_ratios – Per-sample ratios of worst-case loss (from the gradient flow attack) to original audit loss

Return type:

numpy.ndarray
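
Example — a minimal sketch of forming loss ratios. The worst-case batch would normally come from generate_worst_case_examples on a concrete auditor; here it is faked with a small random perturbation so the snippet is self-contained, and the exact calling convention expected of loss_fn (functional loss vs. module, per-sample reduction) is an assumption:

    import torch
    import torch.nn.functional as F
    from inFairness.auditor import Auditor

    network = torch.nn.Linear(10, 2)                       # stand-in model to audit
    loss_fn = F.cross_entropy                              # assumed loss calling convention

    X_audit = torch.randn(64, 10)                          # (n_samples, n_features)
    Y_audit = torch.randint(0, 2, (64,))                   # (n_samples,)
    X_worst = X_audit + 0.01 * torch.randn_like(X_audit)   # placeholder for attack output

    auditor = Auditor()
    loss_ratios = auditor.compute_loss_ratio(X_audit, X_worst, Y_audit, network, loss_fn)
    print(loss_ratios)                                     # numpy.ndarray of per-sample ratios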

generate_worst_case_examples(*args, **kwargs)[source]#

Generates worst-case examples for the input batch of data samples
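
Example — a hedged end-to-end sketch tying the three methods together. ToyAuditor is a hypothetical subclass whose one-step gradient-ascent perturbation merely stands in for the package’s gradient flow attack; its constructor and override signatures are assumptions, since the base class only fixes audit(*args, **kwargs) and generate_worst_case_examples(*args, **kwargs):

    import torch
    import torch.nn.functional as F
    from inFairness.auditor import Auditor

    class ToyAuditor(Auditor):
        """Hypothetical subclass: one gradient-ascent step as a placeholder attack."""

        def __init__(self, network, loss_fn, step_size=0.1):
            super().__init__()
            self.network = network
            self.loss_fn = loss_fn
            self.step_size = step_size

        def generate_worst_case_examples(self, X, Y):
            # Perturb inputs in the direction that increases the loss (placeholder,
            # not the package's gradient flow attack)
            X_adv = X.clone().requires_grad_(True)
            loss = self.loss_fn(self.network(X_adv), Y)
            loss.backward()
            return (X_adv + self.step_size * X_adv.grad.sign()).detach()

    network = torch.nn.Linear(10, 2)
    auditor = ToyAuditor(network, F.cross_entropy)

    X_audit = torch.randn(64, 10)
    Y_audit = torch.randint(0, 2, (64,))

    X_worst = auditor.generate_worst_case_examples(X_audit, Y_audit)
    loss_ratios = auditor.compute_loss_ratio(X_audit, X_worst, Y_audit, network, F.cross_entropy)
    result = auditor.compute_audit_result(loss_ratios, threshold=1.25)
    print(result)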