<h1>Discrimination and Bias</h1>

<p><em>ACM SIGAI public policy blog post, February 20, 2020, by Larry Medsker.</em></p>

<p>Our current public policy posts, focused on ethics and bias in current and emerging areas of AI, build on the survey <a href="https://arxiv.org/pdf/1908.09635.pdf">“A Survey on Bias and Fairness in Machine Learning”</a> by Ninareh Mehrabi <em>et al.</em> and the <a href="https://fairmlbook.org/index.html">resources</a> provided by Barocas <em>et al.</em> The guest co-author of this series of blog posts on AI and bias is Farhana Faruqe, a doctoral student in the George Washington University Human-Technology Collaboration program. We look forward to your comments and suggestions.</p>

<p>Discrimination, unfairness, and bias are terms used frequently these days in the context of AI and data science applications that make decisions affecting the everyday lives of individuals and groups. Machine learning applications depend on datasets that reflect a real world in which individuals hold intentional and unintentional biases, and those biases can lead to discriminatory and unfair outcomes. Broadly, fairness is the absence of any prejudice or favoritism toward an individual or a group, based on their intrinsic or acquired traits, in the context of decision-making.
</p>

<p>Today’s blog post focuses on discrimination, which Mehrabi <em>et al.</em> describe as follows:</p>

<p><strong>Direct Discrimination:</strong> “Direct discrimination happens when protected attributes of individuals explicitly result in non-favorable outcomes toward them.” Traits such as race, color, national origin, religion, sex, family status, disability, marital status, receipt of public assistance, and age are identified as sensitive or protected attributes in the machine learning world. Discriminating on the basis of these attributes, which are listed in the Fair Housing Act (FHA) and the Equal Credit Opportunity Act (ECOA), is illegal.</p>

<p><strong>Indirect Discrimination:</strong> Even when sensitive or protected attributes are not used against an individual, indirect discrimination can still occur. For example, residential zip code is not categorized as a protected attribute, but a zip code can reveal race, which is protected. As a result, “protected groups or individuals still can get treated unjustly as a result of implicit effects from their protected attributes.”</p>

<p><strong>Systemic Discrimination:</strong> In the nursing profession, for example, the custom is to expect a nurse to be a woman, so excluding qualified male nurses from nursing positions is an example of systemic discrimination.
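<p>To make the indirect-discrimination point concrete, here is a small, purely hypothetical Python sketch (our own illustration, not from the survey): a decision rule never sees the protected attribute, only a zip code, yet residential segregation lets the zip code act as a proxy and reproduce the group disparity.</p>

```python
import random

random.seed(0)

# Hypothetical population: the protected attribute strongly predicts zip code
# (residential segregation), with made-up zip codes and a 90/10 split.
def make_person():
    group = random.choice(["A", "B"])  # protected attribute
    zip_code = "90210" if (group == "A") == (random.random() < 0.9) else "10001"
    return group, zip_code

people = [make_person() for _ in range(10_000)]

# A "zip-code-only" decision rule: the protected attribute is never used.
def approve(zip_code):
    return zip_code == "90210"

def approval_rate(group):
    members = [z for g, z in people if g == group]
    return sum(approve(z) for z in members) / len(members)

# Large gap between groups, even though the rule never saw the group label.
print("group A:", approval_rate("A"))
print("group B:", approval_rate("B"))
```

<p>Dropping the protected column is therefore not enough: any feature correlated with it can carry the same signal into the model’s decisions.</p>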
Systemic discrimination is defined as “policies, customs, or behaviors that are a part of the culture or structure of an organization that may perpetuate discrimination against certain subgroups of the population.”<br>
<strong>Statistical Discrimination:</strong> In law enforcement, racial profiling is an example of statistical discrimination: minority drivers are pulled over more often than white drivers. The authors define statistical discrimination as “a phenomenon where decision-makers use average group statistics to judge an individual belonging to that group.”</p>

<p><strong>Explainable Discrimination:</strong> In some cases, “discrimination can be explained using attributes” such as working hours and education, and is then legal and acceptable. In a widely used dataset in the fairness domain, males on average have a higher annual income than females because, on average, females work fewer hours per week than males do. Decisions made without considering working hours could lead to discrimination.
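<p>The explainable-discrimination idea can be sketched with synthetic data (our own illustration, with made-up numbers): if income depends only on hours worked, the raw group gap in income looks like discrimination, but it vanishes once hours are controlled for.</p>

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical data: pay depends only on hours worked, but one group works
# fewer hours on average, producing a raw group gap that hours fully explain.
def sample(group):
    hours = 40 if group == "M" else random.choice([30, 40])
    income = 1000 * hours + random.gauss(0, 500)  # pay driven by hours alone
    return group, hours, income

data = [sample(random.choice(["M", "F"])) for _ in range(10_000)]

def mean_income(pred):
    return mean(inc for g, h, inc in data if pred(g, h))

# Raw gap: large, but explained entirely by average hours worked.
raw_gap = mean_income(lambda g, h: g == "M") - mean_income(lambda g, h: g == "F")

# Conditioned gap: compare only full-time (40h) workers; near zero.
cond_gap = (mean_income(lambda g, h: g == "M" and h == 40)
            - mean_income(lambda g, h: g == "F" and h == 40))

print(f"raw gap:  {raw_gap:,.0f}")
print(f"40h gap:  {cond_gap:,.0f}")
```

<p>The caution cuts both ways: a raw disparity is not proof of unfair treatment, and conditioning on a legitimate attribute is what separates an explainable gap from an unexplainable one.</p>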
</p>

<p><strong>Unexplainable Discrimination:</strong> This type of discrimination, unlike explainable discrimination, is not legal, because “the discrimination toward a group is unjustified.” Researchers have introduced techniques, applied during data preprocessing and training, to remove unexplainable discrimination.</p>

<p>To understand bias in techniques such as machine learning, our next blog post will take up another important aspect: fairness.</p>
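<p>As one concrete example of such a preprocessing technique, the reweighing method of Kamiran and Calders assigns each (group, label) combination the weight P(group)·P(label)/P(group, label), which makes the protected attribute statistically independent of the label in the weighted training data. A minimal sketch, with a made-up eight-row dataset:</p>

```python
from collections import Counter

# Toy dataset: (protected group, favorable outcome?). Group A receives the
# favorable label far more often than group B.
data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

n = len(data)
p_group = Counter(g for g, _ in data)
p_label = Counter(y for _, y in data)
p_joint = Counter(data)

# Reweighing: w(g, y) = P(g) * P(y) / P(g, y). Over-represented combinations
# get weight < 1, under-represented ones weight > 1.
def weight(g, y):
    return (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)

weights = [weight(g, y) for g, y in data]

# Under these weights the favorable-outcome rate is equal across groups,
# so a learner trained on the weighted data sees no group-label association.
def weighted_rate(group):
    num = sum(w for (g, y), w in zip(data, weights) if g == group and y == 1)
    den = sum(w for (g, y), w in zip(data, weights) if g == group)
    return num / den

print(weighted_rate("A"), weighted_rate("B"))  # both 0.5
```

<p>This sketch only balances the training distribution; whether that is the right fairness intervention depends on which disparities in the data are explainable and which are not.</p>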