{"id":143,"date":"2017-06-16T15:14:41","date_gmt":"2017-06-16T15:14:41","guid":{"rendered":"http:\/\/sigai.acm.org\/aimatters\/blog\/?p=143"},"modified":"2017-06-16T15:24:44","modified_gmt":"2017-06-16T15:24:44","slug":"algorithmic-accountability","status":"publish","type":"post","link":"https:\/\/sigai.acm.org\/aimatters\/blog\/2017\/06\/16\/algorithmic-accountability\/","title":{"rendered":"Algorithmic Accountability"},"content":{"rendered":"<p>The previous SIGAI public policy post covered the <a href=\"http:\/\/www.acm.org\/binaries\/content\/assets\/public-policy\/2017_joint_statement_algorithms.pdf\">USACM-EUACM joint statement<\/a> on Algorithmic Transparency and Accountability. Several interesting developments and opportunities are available for SIGAI members to discuss related topics. In particular, individuals and groups are calling for measures to provide independent oversight that might mitigate the dangers of biased, faulty, and malicious algorithms. Transparency is important for data systems and algorithms that guide life-critical systems such as healthcare, air traffic control, and nuclear control rooms. Ben Shneiderman\u2019s Turing lecture is highly recommended on this point:\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=UWuDgY8aHmU\">https:\/\/www.youtube.com\/watch?v=UWuDgY8aHmU<\/a><\/p>\n<p>A robust discussion on the SIGAI Public Policy blog would be great for exploring ideas on oversight measures. 
Additionally, we should weigh in on some fundamental questions such as those raised by Ed Felten in his recent <a href=\"https:\/\/freedom-to-tinker.com\/2017\/05\/31\/what-does-it-mean-to-ask-for-an-explainable-algorithm\/\">article<\/a> \u201cWhat does it mean to ask for an &#8216;explainable&#8217; algorithm?&#8221; He sets up an excellent framework for the discussion, and the comments on his article raise differing points of view we should consider.<\/p>\n<p>Felten writes that \u201cone of the standard critiques of using algorithms for decision-making about people, and especially for consequential decisions about access to housing, credit, education, and so on, is that the algorithms don\u2019t provide an \u2018explanation\u2019 for their results or the results aren\u2019t \u2018interpretable.\u2019 This is a serious issue, but discussions of it are often frustrating. The reason, I think, is that different people mean different things when they ask for an explanation of an algorithm\u2019s results.\u201d Felten discusses four types of explainability:<br \/>\n1. <em>A claim of confidentiality (institutional\/legal).<\/em> Someone withholds relevant information about how a decision is made.<br \/>\n2. <em>Complexity (barrier to big-picture understanding).<\/em> The details of the algorithm are difficult to explain, but the impact of its results on a person can still be understood.<br \/>\n3. <em>Unreasonableness (results don\u2019t make sense).<\/em> The workings of the algorithm are clear and justified by statistical evidence, but its results conflict with our intuitive understanding of how the world works.<br \/>\n4. <em>Injustice (justification for designing the algorithm).<\/em> 
Using the algorithm is unfair, unjust, or morally wrong.<\/p>\n<p>In addition, SIGAI should provide input on the nature of AI systems and what it means to \u201cexplain\u201d how decision-making AI technologies work \u2013 for example, the role of algorithms in supervised and unsupervised systems versus the choices of data and design options\u00a0in creating an operational system.<\/p>\n<p>Your comments are welcome. Also, please share what work you may be doing in the area of algorithmic transparency.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The previous SIGAI public policy post covered the USACM-EUACM joint statement on Algorithmic Transparency and Accountability. Several interesting developments and opportunities are available for SIGAI members to discuss related topics. In particular, individuals and groups are calling for measures to provide independent oversight that might mitigate the dangers of biased, faulty, and malicious algorithms. Transparency &hellip; <a href=\"https:\/\/sigai.acm.org\/aimatters\/blog\/2017\/06\/16\/algorithmic-accountability\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Algorithmic 
Accountability&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[4],"tags":[],"_links":{"self":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/143"}],"collection":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/comments?post=143"}],"version-history":[{"count":8,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/143\/revisions"}],"predecessor-version":[{"id":151,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/143\/revisions\/151"}],"wp:attachment":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/media?parent=143"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/categories?post=143"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/tags?post=143"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}