{"id":492,"date":"2019-06-18T17:09:48","date_gmt":"2019-06-18T17:09:48","guid":{"rendered":"http:\/\/sigai.acm.org\/aimatters\/blog\/?p=492"},"modified":"2019-06-18T17:34:34","modified_gmt":"2019-06-18T17:34:34","slug":"ai-regulation","status":"publish","type":"post","link":"https:\/\/sigai.acm.org\/aimatters\/blog\/2019\/06\/18\/ai-regulation\/","title":{"rendered":"AI Regulation"},"content":{"rendered":"\n<p>With AI so prominent in the news over the past year, public awareness of potential problems arising from the proliferation of AI systems and products has led to increasing calls for regulation. The popular media, and even technical media, contain misinformation and misplaced fears, but plenty of legitimate issues exist even if their relative importance is sometimes misunderstood. Policymakers, researchers, and developers need to be in dialog about the true needs and potential dangers of regulation.<\/p>\n\n\n\n<p>\u201c<a href=\"https:\/\/www.politico.eu\/?s=Google+top+lawyer+pushes+back+against+one-size-fits-all+rules+for+AI\">Google top lawyer pushes back against one-size-fits-all rules for AI<\/a>\u201d by Janosch Delcker at POLITICO is an example of corporate reaction to the calls for regulation. \u201cUnderstanding exactly the applications that we see for AI, and how those should be regulated, that\u2019s an important next chapter,\u201d&nbsp;Kent Walker, Google\u2019s senior vice president for global affairs and the company\u2019s chief legal officer, told POLITICO during a recent visit to Germany. \u201cBut you generally don\u2019t want one-size-fits-all regulation, especially for a tool that is going to be used in a lot of different ways,\u201d he added.<\/p>\n\n\n\n<p>From our policy perspective, the significant risks from AI systems include misuse and faulty or unsafe designs, which can create bias, non-transparency of use, and loss of privacy. AI systems are known to discriminate against minorities, both unintentionally and intentionally. 
An important discussion we should be having is whether governments, international organizations, and&nbsp;big corporations, which&nbsp;have already released dozens of non-binding guidelines for the responsible development and use of AI, are the best entities for writing and enforcing regulations. Non-binding principles alone will not hold companies that develop and apply AI products accountable. An important point in this regard is to hold companies responsible for the product design process itself, not just for testing products after they are in use.<\/p>\n\n\n\n<p>Introducing new government regulations is a long process subject to pressure from lobbyists, and the current US administration is generally inclined against regulation anyway. We should discuss alternatives such as clearinghouses and consumer groups that endorse AI products designed for safety and ethical use. If well publicized, the endorsements of respected non-partisan groups, including professional societies, might be more effective and timely than government regulations. The European Union has released its <a href=\"https:\/\/ec.europa.eu\/digital-single-market\/en\/news\/ethics-guidelines-trustworthy-ai\">Ethics Guidelines for Trustworthy AI<\/a>, and a second document with recommendations on how to boost investment in Europe\u2019s AI industry is to be published. In May 2019, the Organization for Economic Cooperation and Development (OECD) issued its first set of international <a href=\"https:\/\/www.oecd.org\/science\/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm\">OECD Principles on Artificial Intelligence<\/a>, which have been embraced by the United States and leading AI companies. <\/p>\n","protected":false},"excerpt":{"rendered":"<p>With AI in the news so much over the past year, the public awareness of potential problems arising from the proliferation of AI systems and products has led to increasing calls for regulation. 
The popular media, and even technical media, contain misinformation and misplaced fears, but plenty of legitimate issues exist even if their relative &hellip; <a href=\"https:\/\/sigai.acm.org\/aimatters\/blog\/2019\/06\/18\/ai-regulation\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;AI Regulation&#8221;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[4],"tags":[],"_links":{"self":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/492"}],"collection":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/comments?post=492"}],"version-history":[{"count":6,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/492\/revisions"}],"predecessor-version":[{"id":499,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/posts\/492\/revisions\/499"}],"wp:attachment":[{"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/media?parent=492"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/categories?post=492"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sigai.acm.org\/aimatters\/blog\/wp-json\/wp\/v2\/tags?post=492"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}