Facebook, Google, and Bias

Current events involving Facebook and the use of the data it collects and analyzes relate to issues addressed by the SIGAI and USACM working groups on algorithmic accountability, transparency, and bias. The players in this area of ethics and policy range from those who are unaware of the issues to those who intentionally use biased methods and systems to achieve organizational goals. Uses of customer data that are not transparent, or are difficult to discover, not only have negative impacts on individuals and society; they are also hard to address because they are integral to the business models on which companies are built.

A recent Forbes article, “Google’s DeepMind Has An Idea For Stopping Biased AI,” discusses research on AI systems that spread the prejudices humans hold about race and gender: when artificial intelligence is trained on biased data, it may make biased decisions. An example cited in the article is facial recognition systems shown to have difficulty properly recognizing black women.
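
The point about biased training data can be made concrete with a toy experiment. The sketch below is a hypothetical illustration, not any system discussed in the article: a naive nearest-neighbor “recognizer” is trained on data in which one synthetic group is heavily over-represented, and its recognition rate is then measured on balanced test sets. The group labels, feature dimensions, and threshold are all invented for the illustration.

```python
# Hypothetical sketch: an unrepresentative training set produces unequal
# recognition rates across groups. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(42)

def make_group(n, center):
    """Synthetic 2-D feature vectors for one group, clustered around `center`."""
    return rng.normal(loc=center, scale=1.0, size=(n, 2))

# Group A dominates the training data; group B is barely represented.
train = np.vstack([make_group(1000, [0.0, 0.0]),   # group A
                   make_group(10,   [4.0, 4.0])])  # group B

def recognized(x, train_set, radius=0.5):
    """Naive 'recognizer': accept x if it lies close to any training example."""
    return np.min(np.linalg.norm(train_set - x, axis=1)) < radius

# Balanced test sets drawn from the same two groups.
test_a = make_group(500, [0.0, 0.0])
test_b = make_group(500, [4.0, 4.0])
rate_a = np.mean([recognized(x, train) for x in test_a])
rate_b = np.mean([recognized(x, train) for x in test_b])
print(f"Recognition rate, well-represented group A:  {rate_a:.1%}")
print(f"Recognition rate, under-represented group B: {rate_b:.1%}")
```

With these made-up numbers the recognizer does markedly worse on the under-represented group, even though nothing about that group is inherently harder to recognize; the disparity comes entirely from the composition of the training data.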

Machine-learning software is rapidly becoming widely accessible to developers across the world, many of whom are not aware of the dangers of using data that contain biases. The Forbes piece discusses a paper, “Path-Specific Counterfactual Fairness,” by DeepMind researchers Silvia Chiappa and Thomas Gillam. Counterfactual fairness is a criterion for machine decision-making under which a decision about an individual is considered fair if it would have been the same in a counterfactual world where the individual’s sensitive attributes, such as race or gender, had been different; the path-specific variant applies this requirement only along the causal pathways deemed unfair. DeepMind has a new division, DeepMind Ethics & Society, that addresses this and other issues concerning the ethical and social impacts of AI technology.
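
To make the notion more concrete, here is a minimal sketch of a counterfactual-fairness check on a toy structural causal model. It illustrates the general idea, not the method in the DeepMind paper; the variables, coefficients, and decision rule below are all hypothetical.

```python
# Hypothetical sketch of a counterfactual-fairness check on a toy causal model.
import numpy as np

rng = np.random.default_rng(0)

def features(gender, noise):
    """Toy structural causal model: `qualification` depends only on latent
    noise, but the observed `referral` score is unfairly influenced by gender."""
    qualification = noise
    referral = 0.5 * qualification + 0.8 * gender   # unfair path: gender -> referral
    return qualification, referral

def decision(qualification, referral):
    """Hypothetical hiring rule: threshold a weighted score."""
    return 0.7 * qualification + 0.3 * referral > 0.6

def counterfactually_fair(gender, noise):
    """The decision is counterfactually fair for this individual if it stays the
    same when gender is flipped while the latent noise is held fixed."""
    factual = decision(*features(gender, noise))
    counterfactual = decision(*features(1 - gender, noise))
    return factual == counterfactual

# Fraction of simulated individuals whose decision would flip if only
# their gender were different.
genders = rng.integers(0, 2, size=10_000)
noises = rng.normal(0.6, 0.3, size=10_000)
flipped = np.mean([not counterfactually_fair(g, u) for g, u in zip(genders, noises)])
print(f"Decisions that change under the gender counterfactual: {flipped:.1%}")
```

In the path-specific setting the paper studies, the comparison is restricted so that only the influence of the sensitive attribute along pathways judged unfair is removed, while effects along acceptable pathways are left intact.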

The Forbes article quotes Kriti Sharma, a consultant in artificial intelligence with Sage, the British enterprise software company, as follows: “Understanding the risk of bias in AI is not a problem that technologists can solve in a vacuum. We need collaboration between experts in anthropology, law, policy makers, business leaders to address the questions emerging technology will continue to ask of us. It is exciting to see increased academic research activity in AI fairness and accountability over the last 18 months, but in truth we aren’t seeing enough business leaders, companies applying AI, those who will eventually make AI mainstream in every aspect of our lives, take the same level of responsibility to create unbiased AI.”

One thought on “Facebook, Google, and Bias”

  1. There are data which contain bias, and this is a huge issue. On Facebook, all your data are uploaded and stored online for personal purposes, but those data are vulnerable to skilled users. Data kept online should be kept safe and secure.
