Task 2: Fair Predictions on COMPAS

In this vignette, we focus on Task 2 (Fair Prediction) and, in particular, on the limitations of existing methods in the literature in providing causally meaningful fair predictions.

Fair Predictions on COMPAS

A team of data scientists from ProPublica has shown that the COMPAS dataset from Broward County exhibits a strong racial bias against minorities. The team now wants to produce fair predictions \(\widehat{Y}\) on the dataset, to replace the biased predictions. They first load the COMPAS data:

# load the data
dat <- get(data("compas", package = "faircause"))
knitr::kable(head(dat), caption = "COMPAS dataset.")
COMPAS dataset.

| sex  | age | race      | juv_fel_count | juv_misd_count | juv_other_count | priors_count | c_charge_degree | two_year_recid |
|------|-----|-----------|---------------|----------------|-----------------|--------------|-----------------|----------------|
| Male | 69  | Non-White | 0             | 0              | 0               | 0            | F               | 0              |
| Male | 34  | Non-White | 0             | 0              | 0               | 0            | F               | 1              |
| Male | 24  | Non-White | 0             | 0              | 1               | 4            | F               | 1              |
| Male | 23  | Non-White | 0             | 1              | 0               | 1            | F               | 0              |
| Male | 43  | Non-White | 0             | 0              | 0               | 2            | F               | 0              |
| Male | 44  | Non-White | 0             | 0              | 0               | 0            | M               | 0              |
# load the SFM projection
mdat <- SFM_proj("compas")
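The SFM projection maps the dataset's columns onto the roles of the Standard Fairness Model: the protected attribute \(X\), confounders \(Z\), mediators \(W\), the outcome \(Y\), and the two attribute levels \(x_0, x_1\). A quick way to verify the mapping returned by SFM_proj() is to inspect the list directly; the commented mapping below is our reading of the COMPAS columns, not output of the package:

# inspect which column plays which role in the Standard Fairness Model
str(mdat)

# roughly, race acts as the protected attribute X, demographics such as age and
# sex as confounders Z, the juvenile counts, priors_count and c_charge_degree as
# mediators W, and two_year_recid as the outcome Y; the authoritative mapping is
# whatever SFM_proj("compas") returns above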

To produce fair predictions, the team first experiments with four different approaches.

(i) Random Forest without fairness constraints

# Method 1: Random Forest
org_dat <- dat
# out-of-bag class predictions from a random forest fit without fairness constraints
org_dat$two_year_recid <- ranger(two_year_recid ~ ., dat,
                                 classification = TRUE)$predictions

# Method 1: decompose Total Variation
org_tvd <- fairness_cookbook(org_dat, mdat[["X"]], mdat[["Z"]], mdat[["W"]],
                             mdat[["Y"]], mdat[["x0"]], mdat[["x1"]])
org_plt <- autoplot(org_tvd, decompose = "xspec", dataset = dataset) +
  xlab("") + ylab("")

(ii) Logistic regression trained with reweighing (Kamiran and Calders 2012)

import os

# run the reweighing script in the Python session (accessed from R via reticulate)
exec(open(os.path.join(r.root, "py", "reweighing_compas.py")).read())
# obtain reweighed predictions (the model is trained and evaluated on the same data)
Yhat_rew = reweigh_and_predict(r.dat, r.dat)
# Method 2: Reweighing by Kamiran & Calders
reweigh_dat <- dat
reweigh_dat$two_year_recid <- as.vector(py$Yhat_rew)

reweigh_tvd <- fairness_cookbook(
  reweigh_dat, mdat[["X"]], mdat[["Z"]], mdat[["W"]], mdat[["Y"]], mdat[["x0"]],
  mdat[["x1"]])

rew_plt <- autoplot(reweigh_tvd, decompose = "xspec", dataset = dataset) +
  xlab("") + ylab("")

(iii) Fair Reductions approach of (Agarwal et al. 2018)

# run the reductions script and obtain its predictions (Python, via reticulate)
exec(open(os.path.join(r.root, "py", "reductions_compas.py")).read())
Yhat_red = reduce_and_predict(r.dat, r.dat, 0.01)
# Method 3: Reductions (Agarwal et al.)

red_dat <- dat
red_dat$two_year_recid <- as.vector(py$Yhat_red)
red_tvd <- fairness_cookbook(
  red_dat, mdat[["X"]], mdat[["Z"]], mdat[["W"]], mdat[["Y"]], mdat[["x0"]],
  mdat[["x1"]]
)

red_plt <- autoplot(red_tvd, decompose = "xspec", dataset = dataset) +
  xlab("") + ylab("")

(iv) Random Forest with reject-option post-processing (Kamiran, Karim, and Zhang 2012)

# Method 4: reject-option classification
# out-of-bag predicted probabilities from a random forest
rjo_prb <- ranger(two_year_recid ~ ., dat, probability = TRUE)$predictions[, 2]
rjo_dat <- dat
# post-process the probabilities with reject-option classification across race groups
rjo_dat$two_year_recid <- RejectOption(rjo_prb, rjo_dat$race)

rjo_tvd <- fairness_cookbook(rjo_dat, mdat[["X"]], mdat[["Z"]], mdat[["W"]],
                             mdat[["Y"]], mdat[["x0"]], mdat[["x1"]])
rjo_plt <- autoplot(rjo_tvd, decompose = "xspec", dataset = dataset) +
  xlab("") + ylab("")

Are the methods successful at eliminating discrimination?

The fair prediction algorithms used above are intended to set the TV measure to \(0\). After constructing these predictors, the ProPublica team uses the Fairness Cookbook to inspect their causal implications: following its steps, the team computes the TV measure together with the corresponding measures of direct, indirect, and spurious discrimination. These decompositions are visualized in Figure 1.
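The four decomposition plots produced above (org_plt, rew_plt, red_plt, rjo_plt) can be arranged into a single figure. A minimal sketch, assuming the patchwork package is available (the original figure may have been assembled differently):

library(patchwork)

# arrange the four TV decompositions in a 2 x 2 grid
(org_plt + rew_plt) / (red_plt + rjo_plt) +
  plot_annotation(title = "Fair predictions on COMPAS: TV decompositions")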

Figure 1: Fair Predictions on the COMPAS dataset.

The ProPublica team notes that all methods substantially reduce \(\text{TV}_{x_0,x_1}(\widehat{y})\); however, the measures of direct, indirect, and spurious effects are not necessarily reduced to \(0\), consistent with the Fair Prediction Theorem.
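The numeric estimates behind Figure 1 can also be printed for each method; a short sketch, assuming the objects returned by fairness_cookbook() report their estimates through the usual summary() generic:

# numeric estimates of the TV, direct, indirect, and spurious measures
summary(org_tvd)
summary(reweigh_tvd)
summary(red_tvd)
summary(rjo_tvd)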

How to fix the issue?

To produce causally meaningful fair predictions, we suggest using the fairadapt package (Plečko and Meinshausen 2020; Plečko, Bennett, and Meinshausen 2021). In particular, the package offers a way of removing discrimination that is based on the causal diagram. In this application, we are interested in constructing fair predictions that remove both the direct and the indirect effect. First, we obtain the adjacency matrix representing the causal diagram associated with the COMPAS dataset:

set.seed(2022)
# load the causal diagram
mats <- get_mat(dataset)
adj.mat <- mats[[1L]]
cfd.mat <- mats[[2L]]

causal_graph <- fairadapt::graphModel(adj.mat, cfd.mat)
layout_matrix <- matrix(c(
  1, 2,  # age
  -1, 2,  # sex
  -2, -2,  # juv_fel_count
  -1, -2, # juv_misd_count
  0, -2, # juv_other_count
  1, -2,  # priors_count
  2, -2,  # c_charge_degree
  -3, 0,  # race
  3, 0    # two_year_recid
), ncol = 2, byrow = TRUE)

plot(causal_graph, layout = layout_matrix, vertex.size = 30)
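The get_mat() helper comes from the code accompanying this vignette and returns the adjacency and confounding matrices of the COMPAS causal diagram. For readers without that helper, an equivalent adjacency matrix can be written out by hand. The sketch below encodes one plausible Standard Fairness Model for COMPAS (race as the protected attribute, age and sex as confounders, the criminal-history variables as mediators); it is an assumption for illustration and not necessarily the exact diagram returned by get_mat():

# hand-written adjacency matrix for an assumed SFM over the COMPAS variables
vars <- c("age", "sex", "juv_fel_count", "juv_misd_count", "juv_other_count",
          "priors_count", "c_charge_degree", "race", "two_year_recid")
Zs <- c("age", "sex")
Ws <- c("juv_fel_count", "juv_misd_count", "juv_other_count",
        "priors_count", "c_charge_degree")

adj.mat.manual <- matrix(0L, length(vars), length(vars),
                         dimnames = list(vars, vars))
adj.mat.manual[Zs, c(Ws, "two_year_recid")] <- 1L      # Z -> W and Z -> Y
adj.mat.manual["race", c(Ws, "two_year_recid")] <- 1L  # X -> W and X -> Y
adj.mat.manual[Ws, "two_year_recid"] <- 1L             # W -> Y

# bidirected (confounded) edges between race and the demographic confounders
cfd.mat.manual <- matrix(0L, length(vars), length(vars),
                         dimnames = list(vars, vars))
cfd.mat.manual["race", Zs] <- 1L
cfd.mat.manual[Zs, "race"] <- 1L

Such a hand-written matrix could be passed to fairadapt in place of the adj.mat loaded above.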

After loading the causal diagram, we perform fair data adaptation:

fdp <- fairadapt::fairadapt(two_year_recid ~ ., prot.attr = "race",
                            train.data = dat, adj.mat = adj.mat)

# obtain the adapted data
ad_dat <- fairadapt:::adaptedData(fdp)
ad_dat$race <- dat$race

# obtain predictions based on the adapted data
adapt_oob <- ranger(two_year_recid ~ ., ad_dat,
                    classification = TRUE)$predictions
ad_dat$two_year_recid <- adapt_oob

# decompose the TV for the predictions, using the original covariates
dat.fairadapt <- dat
dat.fairadapt$two_year_recid <- adapt_oob

fairadapt_tvd <- fairness_cookbook(
  dat.fairadapt, mdat[["X"]], mdat[["Z"]], mdat[["W"]],
  mdat[["Y"]], mdat[["x0"]], mdat[["x1"]]
)

# visualize the decomposition
fairadapt_plt <- autoplot(fairadapt_tvd, decompose = "xspec", dataset = dataset) +
  xlab("") + ylab("")

fairadapt_plt

Figure 2: Fair Data Adaptation on the COMPAS dataset.

Figure 2 shows how the fairadapt package can be used to provide causally meaningful predictions that remove both direct and indirect effects.

References

Agarwal, Alekh, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. 2018. “A Reductions Approach to Fair Classification.” In International Conference on Machine Learning, 60–69. PMLR.
Kamiran, Faisal, and Toon Calders. 2012. “Data Preprocessing Techniques for Classification Without Discrimination.” Knowledge and Information Systems 33 (1): 1–33.
Kamiran, Faisal, Asim Karim, and Xiangliang Zhang. 2012. “Decision Theory for Discrimination-Aware Classification.” In 2012 IEEE 12th International Conference on Data Mining, 924–29. IEEE.
Plečko, Drago, Nicolas Bennett, and Nicolai Meinshausen. 2021. “Fairadapt: Causal Reasoning for Fair Data Pre-Processing.” arXiv Preprint arXiv:2110.10200.
Plečko, Drago, and Nicolai Meinshausen. 2020. “Fair Data Adaptation with Quantile Preservation.” Journal of Machine Learning Research 21: 242.