2019–20 Projects:

*Advisor: Anna Rafferty*

*Times: Tuesday/Thursday 2:45-3:45. [You'll typically meet with me only on Thursdays.]*

Almost every day I see a news article discussing how big data and machine learning are leading to increasingly powerful predictive systems in diverse applications, such as assessing insurance risk, calculating credit scores, and deciding whom to hire for a job. These applications have real-world, high-stakes impacts on individuals, and the importance of fairness to individuals in these types of applications has been recognized through laws that protect individuals against discrimination based on certain protected classes. Yet many machine learning algorithms are not transparent in how they make decisions, and it can thus be difficult to determine whether an algorithm is treating individuals fairly - or even to decide what it means for an algorithm to be fair.

There has been increasing excitement in the past few years about developing computational definitions of fairness and discrimination. Applying these metrics has led to several realizations:

- Many machine learning algorithms are systematically incorrect in their predictions about subgroups of the population, with errors often concentrated on disadvantaged subgroups.
- Algorithms can both reproduce biases found in the training data and amplify those biases in their output.
- Because many features of individuals are correlated, an algorithm that does not explicitly use some piece of information (e.g., gender) may still produce output that differs systematically based on that piece of information.
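
The last point - that omitting a protected attribute does not make a model's output independent of it - can be illustrated with a tiny synthetic sketch. Everything here (the group labels, the `hours_listed` proxy feature, the distributions, and the threshold rule) is invented for illustration, not drawn from any real dataset:

```python
import random

random.seed(0)

# Hypothetical synthetic applicants: the decision rule never sees
# `gender`, but `hours_listed` (an invented proxy feature) is
# correlated with it.
people = []
for _ in range(1000):
    gender = random.choice(["A", "B"])
    # Group B tends to report lower values of the proxy feature.
    hours_listed = random.gauss(40 if gender == "A" else 35, 5)
    people.append({"gender": gender, "hours_listed": hours_listed})

def hire(person):
    # A "gender-blind" rule that thresholds only the proxy feature.
    return person["hours_listed"] >= 38

def selection_rate(g):
    members = [p for p in people if p["gender"] == g]
    return sum(hire(p) for p in members) / len(members)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
print(f"selection rate, group A: {rate_a:.2f}")
print(f"selection rate, group B: {rate_b:.2f}")
```

Even though `gender` never appears in the decision rule, the two groups end up with noticeably different selection rates, because the proxy feature carries the information indirectly.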

These realizations are troubling, and have sparked significant interest in computational ways to try to **mitigate** algorithmic unfairness.

In this project, you'll be exploring different ways of characterizing fairness and algorithmic approaches to improving fairness outcomes. Specifically, you will:

- Research notions of what fairness means when applied to an algorithm and what it means for a machine learning classification algorithm to give biased results.
- Implement fairness-sensitive versions of 2-3 classification algorithms.
- Identify 1-2 papers that apply a classification algorithm to a publicly available dataset (or a dataset the authors will share with us), and compare their original results to the results of the fairness-sensitive algorithms you implemented. If the paper did not use one of the classification algorithms that your fairness-sensitive algorithms are based on, then you'll likely also want to apply the non-fairness-sensitive versions of the algorithms to see the differences. The choice of papers and dataset will be informed by your research about computational definitions of fairness.
- Hypothesize about the causes of the differences you observe on these datasets between (a) the results from the fairness-sensitive version of each algorithm and the original version of that algorithm, and (b) the results from the different fairness-sensitive algorithms. You'll be making sense of the patterns you see in terms of the characteristics of each algorithm and the specific type of unfairness it's intended to mitigate, as well as the characteristics of the dataset.
- (If time permits) Investigate why biased or unfair results occur more systematically by exploring algorithms for interpreting what features are being used by a black-box algorithm.
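
As you research definitions of fairness, it may help to see how two commonly discussed metrics reduce to simple computations. This is a minimal sketch, assuming binary labels, binary predictions, and a binary protected attribute; the toy values are invented for illustration and do not come from any paper or dataset:

```python
def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the groups."""
    rates = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

def true_positive_rate_gap(y_true, y_pred, group):
    """Difference in true-positive rates between the groups -
    the quantity constrained by equality-of-opportunity definitions."""
    tprs = {}
    for g in set(group):
        pos = [i for i, gi in enumerate(group) if gi == g and y_true[i] == 1]
        tprs[g] = sum(y_pred[i] for i in pos) / len(pos)
    vals = sorted(tprs.values())
    return vals[-1] - vals[0]

# Tiny hand-made example (values are illustrative only).
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(y_pred, group))
print(true_positive_rate_gap(y_true, y_pred, group))
```

Note that the two metrics can disagree on the same predictions - part of the project is understanding which notion each fairness-sensitive algorithm is actually optimizing.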

(Wondering how this project is different from the other comps project focused on algorithmic fairness? That project is focused specifically on two criminal justice datasets and understanding what previous analyses of the datasets have uncovered as well as applying fair classification algorithms to these datasets. In this project, we will not be focused on criminal justice datasets, and will focus somewhat more on algorithmic approaches for mitigating unfairness.)

By the end of the project, you will produce two deliverables:

- You will write a paper that provides a literature review about definitions of fairness in machine learning and approaches to developing fairer algorithms. The paper will also describe the algorithms that you implemented and why these algorithms should lead to fairer outcomes. Finally, the paper will summarize your investigation of the application of fairness-sensitive algorithms to existing work, including your results and well-supported hypotheses about why the algorithms' results differed from one another in particular ways.
- You will also create well-documented code that directly produces the results discussed in your paper.

In this project, you'll be moving frequently between mathematical definitions of fairness, conceptual ideas about what it means for something to be fair, and algorithmic instantiations of these ideas. You don't need to have previous experience with this kind of work, but you do need to be willing (and hopefully excited!) to be engaged in algorithmic and mathematical analyses. Previous experience working with large datasets may be helpful but is not necessary. Some courses that may be useful but are not required are Algorithms, Advanced Algorithms, Artificial Intelligence, Data Mining, Computational Models of Cognition, Data Science, and Linear Algebra.

Below are a few papers about some instances where algorithmic bias has been uncovered, what it means for an algorithm to be fair, and some strategies that researchers have used to try to create algorithms that are less biased. Note that these references are intended to provide a very minimal start for your literature search - they are certainly not the only nor necessarily the best sources for ideas. You will be finding and reading many additional papers!

- Courtland, R. (2018). Bias detectives: the researchers striving to make algorithms fair. Nature, 558(7710), 357.
This article discusses a range of examples of algorithmic bias, including where biases have had an impact in the criminal justice system. In this comps project, we won't be focusing on the criminal justice system because there's another great comps project this year where that will be the focus!

- Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems (pp. 3315-3323).
This paper discusses what it means for an algorithm to be fair and presents a case study about fairness with regard to race when making loan decisions based on credit scores.

- Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K.-W. (2017). Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
This paper shows an example where a dataset has a particular skew - images of cooking more often involve women than men - but the algorithm learning from these images amplifies that bias, making it extremely unlikely to label a cooking image as including a man.
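
Bias amplification in this sense can be measured by comparing the skew of the training labels with the skew of the model's predictions. A minimal sketch, with made-up numbers rather than the paper's actual statistics:

```python
def skew(labels, target):
    """Fraction of labels equal to `target`."""
    return sum(1 for label in labels if label == target) / len(labels)

# Invented example: the dataset is 66% "woman" for cooking images,
# but the model predicts "woman" 84% of the time.
train_labels = ["woman"] * 66 + ["man"] * 34
predictions  = ["woman"] * 84 + ["man"] * 16

train_skew = skew(train_labels, "woman")
pred_skew = skew(predictions, "woman")

# Positive amplification: the model is more skewed than its training data.
print(f"training skew: {train_skew:.2f}")
print(f"prediction skew: {pred_skew:.2f}")
print(f"amplification: {pred_skew - train_skew:.2f}")
```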

- Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2015, August). Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 259-268). ACM.
This paper introduces a definition of what it means for an algorithm to be biased, grounded in legal definitions of disparate impact.
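
The core quantity in this line of work can be sketched as the ratio of selection rates between groups, compared against the threshold from the legal "80% rule". The toy predictions below are invented for illustration, not taken from the paper:

```python
def disparate_impact_ratio(y_pred, group):
    """Ratio of the lowest group selection rate to the highest one.
    Values below 0.8 indicate disparate impact under the 80% rule."""
    rates = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    return min(rates.values()) / max(rates.values())

# Illustrative toy predictions (not real data).
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(y_pred, group)
print(f"disparate impact ratio: {ratio:.2f}")
print("fails 80% rule" if ratio < 0.8 else "passes 80% rule")
```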