{"id":552,"date":"2020-02-28T22:17:40","date_gmt":"2020-02-28T22:17:40","guid":{"rendered":"http:\/\/sigai.acm.org\/aimatters\/blog\/?p=552"},"modified":"2020-02-28T22:17:40","modified_gmt":"2020-02-28T22:17:40","slug":"bias-and-fairness","status":"publish","type":"post","link":"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/","title":{"rendered":"Bias and Fairness"},"content":{"rendered":"\n<p><\/p>\n\n\n\n<p>Today\u2019s post has AI and Policy news updates and our next installment on Bias and Policy: the fairness component.<\/p>\n\n\n\n<p><strong>News Items for February, 2020<\/strong><\/p>\n\n\n\n<ul><li>OECD launched the&nbsp;<a href=\"https:\/\/www.oecd.org\/going-digital\/ai\/about-the-oecd-ai-policy-observatory.pdf\">OECD.AI Observatory<\/a>, an online platform to shape and share AI policies across the globe.&nbsp;<\/li><li>The White House <a href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2020\/02\/American-AI-Initiative-One-Year-Annual-Report.pdf\">released<\/a> the American Artificial Intelligence Initiative: Year One Annual Report and supported the OECD policy.<\/li><\/ul>\n\n\n\n<p><strong>Bias and Fairness<\/strong><\/p>\n\n\n\n<p>In terms of decision-making and policy, fairness can be <a href=\"https:\/\/arxiv.org\/pdf\/1908.09635.pdf\">defined<\/a> as \u201cthe absence of any prejudice or favoritism towards an individual or a group based on their inherent or acquired characteristics\u201d.&nbsp; Five of the most widely used definitions are equalized odds, equal opportunity, demographic parity, fairness through unawareness (also called group unaware), and treatment equality.&nbsp;<\/p>\n\n\n\n<p>The <a href=\"https:\/\/arxiv.org\/abs\/1610.02413\">concept of equalized odds and equal opportunity<\/a> is that individuals who qualify for a desirable outcome should have an equal chance of being correctly assigned that outcome, regardless of whether they belong to a protected or unprotected group (e.g., female\/male). 
The additional concepts \u201cdemographic parity\u201d and \u201cgroup unaware\u201d are illustrated by the Google <a href=\"https:\/\/research.google.com\/bigpicture\/attacking-discrimination-in-ml\/\">visualization research team<\/a> in an interactive example \u201csimulating loan decisions for different groups\u201d. Equal opportunity focuses on the true positive rate within each group. <\/p>\n\n\n\n<p>Demographic parity, by contrast, focuses only on the overall positive rate. Consider a loan approval process for two groups, A and B. Under demographic parity, the overall loan approval rate must be equal for group A and group B, regardless of whether a person belongs to a protected group. As a result, some people in group A who would pay back the loan might be disadvantaged compared to people in group B who would not pay it back.\u00a0 Under equal opportunity, the people in group A are not at this disadvantage, since that concept equalizes the true positive rate. Under fairness through unawareness, by <a href=\"https:\/\/arxiv.org\/pdf\/1908.09635.pdf\">definition<\/a>, &#8220;an algorithm is fair as long as any protected attributes A are not explicitly used in the decision-making process&#8221;. <\/p>\n\n\n\n<p>Each of these fairness concepts or definitions falls under individual fairness, subgroup fairness, or group fairness. 
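As a minimal sketch (not part of the original post), the contrast between demographic parity and equal opportunity in the loan example can be made concrete with per-group confusion counts; the group labels and numbers below are hypothetical.

```python
# Hypothetical loan outcomes per group: tp = approved & would repay,
# fp = approved & would default, fn = denied & would repay,
# tn = denied & would default. (Illustrative numbers only.)
groups = {
    "A": {"tp": 40, "fp": 10, "fn": 20, "tn": 30},
    "B": {"tp": 30, "fp": 20, "fn": 10, "tn": 40},
}

def positive_rate(g):
    """Overall approval rate: what demographic parity equalizes."""
    c = groups[g]
    return (c["tp"] + c["fp"]) / (c["tp"] + c["fp"] + c["fn"] + c["tn"])

def true_positive_rate(g):
    """Approval rate among those who would repay: what equal opportunity equalizes."""
    c = groups[g]
    return c["tp"] / (c["tp"] + c["fn"])

for g in groups:
    print(g, positive_rate(g), round(true_positive_rate(g), 2))
```

With these numbers both groups have an approval rate of 0.5, so demographic parity is satisfied, yet the true positive rates differ (about 0.67 for A versus 0.75 for B): qualified applicants in group A are approved less often, which is exactly the disadvantage that equal opportunity, but not demographic parity, rules out.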
For example, demographic parity, equalized odds, and equal opportunity are group fairness definitions; fairness through awareness falls under the individual type, where the focus is on individuals rather than on the group as a whole.<\/p>\n\n\n\n<p>Definitions of bias fall into <a href=\"https:\/\/arxiv.org\/pdf\/1908.09635.pdf\">three categories<\/a>: data, algorithmic, and user interaction feedback loop:<br><strong>Data<\/strong> &#8212; behavioral bias, presentation bias, linking bias, and content production bias;<br><strong>Algorithmic<\/strong> &#8212; historical bias, aggregation bias, temporal bias, and social bias;<br><strong>User Interaction<\/strong> &#8212; popularity bias, ranking bias, evaluation bias, and emergent bias. <\/p>\n\n\n\n<p>Bias is a large domain with much to explore and take into consideration. Bias and public policy will be further discussed in future blog posts.<\/p>\n\n\n\n<p>This series of posts on Bias has been co-authored by Farhana Faruqe, doctoral student in the GWU Human-Technology Collaboration group.<\/p>\n\n\n\n<p><strong>References\u00a0 <\/strong><br>\u00a0[1] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. CoRR, abs\/1908.09635, 2019.<br>[2] Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.). Curran Associates, Inc., 3315\u20133323. http:\/\/papers.nips.cc\/paper\/6374-equality-of-opportunity-in-supervised-learning.pdf<br>[3] Martin Wattenberg, Fernanda Viegas, and Moritz Hardt. Attacking discrimination with smarter machine learning. 
Accessed at <a href=\"https:\/\/research.google.com\/bigpicture\/attacking-discrimination-in-ml\/\">https:\/\/research.google.com\/bigpicture\/attacking-discrimination-in-ml\/<\/a>, 2016<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Today\u2019s post has AI and Policy news updates and our next installment on Bias and Policy: the fairness component. News Items for February, 2020 OECD launched the&nbsp;OECD.AI Observatory, an online platform to shape and share AI policies across the globe.&nbsp; The White House released the American Artificial Intelligence Initiative:Year One Annual Report and supported the [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"categories":[12],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Bias and Fairness - ACM SIGAI<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Bias and Fairness - ACM SIGAI\" \/>\n<meta property=\"og:description\" content=\"Today\u2019s post has AI and Policy news updates and our next installment on Bias and Policy: the fairness component. 
News Items for February, 2020 OECD launched the&nbsp;OECD.AI Observatory, an online platform to shape and share AI policies across the globe.&nbsp; The White House released the American Artificial Intelligence Initiative:Year One Annual Report and supported the [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/\" \/>\n<meta property=\"og:site_name\" content=\"ACM SIGAI\" \/>\n<meta property=\"article:published_time\" content=\"2020-02-28T22:17:40+00:00\" \/>\n<meta name=\"author\" content=\"Larry Medsker\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Larry Medsker\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/\",\"url\":\"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/\",\"name\":\"Bias and Fairness - ACM 
SIGAI\",\"isPartOf\":{\"@id\":\"https:\/\/sigai.acm.org\/main\/#website\"},\"datePublished\":\"2020-02-28T22:17:40+00:00\",\"author\":{\"@id\":\"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/5097a3e1c76f2c205fe0f5ebb9b51fdb\"},\"breadcrumb\":{\"@id\":\"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/sigai.acm.org\/main\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Bias and Fairness\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/sigai.acm.org\/main\/#website\",\"url\":\"https:\/\/sigai.acm.org\/main\/\",\"name\":\"ACM SIGAI\",\"description\":\"ACM Special Interest Group on Artificial Intelligence\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/sigai.acm.org\/main\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/5097a3e1c76f2c205fe0f5ebb9b51fdb\",\"name\":\"Larry Medsker\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/a175bde07d4c8846a16bc64afa6e97f1?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/a175bde07d4c8846a16bc64afa6e97f1?s=96&d=mm&r=g\",\"caption\":\"Larry Medsker\"},\"url\":\"https:\/\/sigai.acm.org\/main\/author\/larrym\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Bias and Fairness - ACM SIGAI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/","og_locale":"en_US","og_type":"article","og_title":"Bias and Fairness - ACM SIGAI","og_description":"Today\u2019s post has AI and Policy news updates and our next installment on Bias and Policy: the fairness component. News Items for February, 2020 OECD launched the&nbsp;OECD.AI Observatory, an online platform to shape and share AI policies across the globe.&nbsp; The White House released the American Artificial Intelligence Initiative:Year One Annual Report and supported the [&hellip;]","og_url":"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/","og_site_name":"ACM SIGAI","article_published_time":"2020-02-28T22:17:40+00:00","author":"Larry Medsker","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Larry Medsker","Est. 
reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/","url":"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/","name":"Bias and Fairness - ACM SIGAI","isPartOf":{"@id":"https:\/\/sigai.acm.org\/main\/#website"},"datePublished":"2020-02-28T22:17:40+00:00","author":{"@id":"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/5097a3e1c76f2c205fe0f5ebb9b51fdb"},"breadcrumb":{"@id":"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/sigai.acm.org\/main\/2020\/02\/28\/bias-and-fairness\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/sigai.acm.org\/main\/"},{"@type":"ListItem","position":2,"name":"Bias and Fairness"}]},{"@type":"WebSite","@id":"https:\/\/sigai.acm.org\/main\/#website","url":"https:\/\/sigai.acm.org\/main\/","name":"ACM SIGAI","description":"ACM Special Interest Group on Artificial Intelligence","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/sigai.acm.org\/main\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/5097a3e1c76f2c205fe0f5ebb9b51fdb","name":"Larry Medsker","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/sigai.acm.org\/main\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/a175bde07d4c8846a16bc64afa6e97f1?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/a175bde07d4c8846a16bc64afa6e97f1?s=96&d=mm&r=g","caption":"Larry 
Medsker"},"url":"https:\/\/sigai.acm.org\/main\/author\/larrym\/"}]}},"_links":{"self":[{"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/posts\/552"}],"collection":[{"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/comments?post=552"}],"version-history":[{"count":0,"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/posts\/552\/revisions"}],"wp:attachment":[{"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/media?parent=552"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/categories?post=552"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sigai.acm.org\/main\/wp-json\/wp\/v2\/tags?post=552"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}