Bias and Fairness

Today’s post has AI and Policy news updates and our next installment on Bias and Policy: the fairness component.

News Items for February, 2020

  • OECD launched the OECD.AI Observatory, an online platform to shape and share AI policies across the globe. 
  • The White House released the American Artificial Intelligence Initiative: Year One Annual Report and supported the OECD policy.

Bias and Fairness

In terms of decision-making and policy, fairness can be defined as “the absence of any prejudice or favoritism towards an individual or a group based on their inherent or acquired characteristics” [1]. Five of the most widely used definitions are equalized odds, equal opportunity, demographic parity, fairness through unawareness (also called group unaware), and treatment equality.

The idea behind equalized odds and equal opportunity is that individuals who qualify for a desirable outcome should have an equal chance of being correctly classified, regardless of whether they belong to a protected or unprotected group (e.g., female/male) [2]. The related concepts of “demographic parity” and “group unaware” fairness are illustrated by the Google visualization research team with nice interactive visualizations of an example “simulating loan decisions for different groups” [3]. Equal opportunity focuses on the true positive rate within each group.
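As a minimal sketch of the equal-opportunity idea (the data and group labels here are hypothetical, purely for illustration), we can compare the true positive rate — the fraction of truly qualified individuals who receive the positive outcome — across two groups:

```python
def true_positive_rate(y_true, y_pred):
    """Fraction of truly qualified individuals (y_true == 1)
    who received the positive outcome (y_pred == 1)."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives) if positives else 0.0

# Toy labels: 1 = would repay the loan; prediction: 1 = loan approved.
group_a_true = [1, 1, 0, 1, 0, 1]
group_a_pred = [1, 1, 0, 0, 0, 1]
group_b_true = [1, 0, 1, 1, 0, 0]
group_b_pred = [1, 0, 1, 0, 1, 0]

tpr_a = true_positive_rate(group_a_true, group_a_pred)  # 3/4 = 0.75
tpr_b = true_positive_rate(group_b_true, group_b_pred)  # 2/3 ≈ 0.67

# Equal opportunity is satisfied when the TPRs are (approximately) equal;
# here the gap suggests qualified applicants in group B are approved less often.
gap = abs(tpr_a - tpr_b)
```

Note that this check deliberately ignores how the two groups are treated among unqualified applicants; equalized odds would additionally require equal false positive rates.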

Demographic parity, on the other hand, focuses only on the positive rate. Consider a loan approval process for two groups, A and B. Under demographic parity, the rate of approved loans should be equal in groups A and B, regardless of whether a person belongs to a protected group. Because demographic parity looks only at the overall approval rate, some people in group A who would pay back the loan might be disadvantaged compared to people in group B who would not. Under equal opportunity they would not be, since that concept focuses on the true positive rate. Finally, under fairness through unawareness, “an algorithm is fair as long as any protected attributes A are not explicitly used in the decision-making process” [1].
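The demographic-parity idea from the loan example above can be sketched as follows (again with made-up toy data): only the overall approval rate per group is compared, with no reference to who would actually repay.

```python
def approval_rate(y_pred):
    """Fraction of applicants in a group who were approved."""
    return sum(y_pred) / len(y_pred)

# Hypothetical approval decisions: 1 = loan approved, 0 = denied.
group_a_pred = [1, 1, 0, 0, 0, 1]  # 3 of 6 approved
group_b_pred = [1, 0, 1, 0, 1, 0]  # 3 of 6 approved

rate_a = approval_rate(group_a_pred)
rate_b = approval_rate(group_b_pred)

# Demographic parity holds here: both groups have a 0.5 approval rate,
# even though the groups may differ in how many applicants would repay.
```

This is exactly why demographic parity can disadvantage qualified members of one group: equalizing the raw approval rates says nothing about whether approvals go to the people who would pay the loan back.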

All of these fairness concepts or definitions fall under individual fairness, subgroup fairness, or group fairness. For example, demographic parity, equalized odds, and equal opportunity are group fairness measures, while fairness through awareness is an individual fairness measure, where the focus is on individuals rather than on the overall group.
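The “fairness through unawareness” approach quoted earlier is the simplest of these to sketch in code: the protected attributes are just removed before the decision process sees the data. (The feature names below are hypothetical.)

```python
# Hypothetical applicant record; "gender" stands in for a protected attribute.
applicant = {"income": 52000, "credit_score": 690, "gender": "F"}

PROTECTED = {"gender"}

# Fairness through unawareness: drop protected attributes so the
# decision-making process cannot explicitly use them.
features = {k: v for k, v in applicant.items() if k not in PROTECTED}
```

A well-known caveat is that removing the protected attribute does not remove its influence: other features (e.g., postal code) can act as proxies for it, which is one reason the group-level definitions above are used instead.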

Definitions of bias fall into three categories: data, algorithmic, and user interaction (feedback loop):

  • Data — behavioral bias, presentation bias, linking bias, and content production bias;
  • Algorithmic — historical bias, aggregation bias, temporal bias, and social bias;
  • User interaction — popularity bias, ranking bias, evaluation bias, and emergent bias.

Bias is a large domain with much to explore and take into consideration. Bias and public policy will be further discussed in future blog posts.

This series of posts on Bias has been co-authored by Farhana Faruqe, doctoral student in the GWU Human-Technology Collaboration group.

References

[1] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. CoRR, abs/1908.09635, 2019.
[2] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.). Curran Associates, Inc., 3315–3323, 2016. http://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning.pdf
[3] Martin Wattenberg, Fernanda Viegas, and Moritz Hardt. Attacking discrimination with smarter machine learning. https://research.google.com/bigpicture/attacking-discrimination-in-ml/, 2016.
