{"id":370,"date":"2018-10-03T21:14:09","date_gmt":"2018-10-03T21:14:09","guid":{"rendered":"http:\/\/sigai.acm.org\/aimatters\/blog\/?p=370"},"modified":"2018-10-03T21:14:09","modified_gmt":"2018-10-03T21:14:09","slug":"ml-safety-by-design","status":"publish","type":"post","link":"https:\/\/sigai.acm.org\/aimatters\/blog\/2018\/10\/03\/ml-safety-by-design\/","title":{"rendered":"ML Safety by Design"},"content":{"rendered":"<p>In a recent post, we discussed the need for policymakers to recognize that AI and Autonomous Systems (AI\/AS) always require varying degrees of human involvement (\u201chybrid\u201d human\/machine systems). Understanding the potential and limitations of combining technologies and humans is important for realistic policymaking. A key element, along with accurate forecasts of technological change, is the safety of AI\/AS-human products, as discussed in the IEEE report \u201c<a href=\"http:\/\/standards.ieee.org\/develop\/indconn\/ec\/ead_v1.pdf\">Ethically Aligned Design<\/a>\u201d, subtitled \u201cA Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems\u201d; in Ben Shneiderman\u2019s excellent\u00a0<a href=\"http:\/\/theinstitute.ieee.org\/ieee-roundup\/blogs\/blog\/applauding-ieees-efforts-in-establishing-artificial-intelligence-guidelines\">summary and comments<\/a>\u00a0on the report; and in the\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=UWuDgY8aHmU\">YouTube video<\/a>\u00a0of his Turing Institute Lecture on \u201cAlgorithmic Accountability: Design for Safety\u201d.<\/p>\n<p>In Shneiderman\u2019s <a href=\"http:\/\/www.pnas.org\/content\/113\/48\/13538.full\">proposal<\/a>\u00a0for a National Algorithms Safety Board, he writes, \u201cWhat might help are traditional forms of independent oversight that use knowledgeable people who have powerful tools to anticipate, monitor, and retrospectively review operations of vital national services. 
The three forms of independent oversight that have been used in the past by industry and governments\u2014planning oversight, continuous monitoring by knowledgeable review boards using advanced software, and a retrospective analysis of disasters\u2014provide guidance for responsible technology leaders and concerned policy makers. Considering all three forms of oversight could lead to policies that prevent inadequate designs, biased outcomes, or criminal actions.\u201d<\/p>\n<p>Efforts to provide \u201csafety by design\u201d include work at Google on <a href=\"https:\/\/medium.com\/google-design\/human-centered-machine-learning-a770d10562cd\">Human-Centered Machine Learning<\/a> and a general \u201c<a href=\"https:\/\/cloud.google.com\/inclusive-ml\/\">human-centered approach<\/a> that foregrounds\u00a0<a href=\"https:\/\/ai.google\/education\/responsible-ai-practices\">responsible AI practices<\/a>\u00a0and products that work well for all people and contexts. These values of responsible and inclusive AI are at the core of the AutoML suite of machine learning products &#8230;\u201d<br \/>\nFurther work is needed to systematize and enforce good practices in human-centered AI design and development, including algorithmic transparency and guidance for the selection of unbiased data used in machine learning systems.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In a recent post, we discussed the need for policymakers to recognize that AI and Autonomous Systems (AI\/AS) always require varying degrees of human involvement (\u201chybrid\u201d human\/machine systems). Understanding the potential and limitations of combining technologies and humans is important for realistic policymaking. 
A key element, along with accurate forecasts of the changes in &hellip; <a href=\"https:\/\/sigai.acm.org\/aimatters\/blog\/2018\/10\/03\/ml-safety-by-design\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;ML Safety by Design&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[4],"tags":[],"_links":{"self":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/370"}],"collection":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/comments?post=370"}],"version-history":[{"count":1,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/370\/revisions"}],"predecessor-version":[{"id":371,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/370\/revisions\/371"}],"wp:attachment":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/media?parent=370"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/categories?post=370"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/tags?post=370"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}