AI Regulation

With AI in the news so much over the past year, public awareness of potential problems arising from the proliferation of AI systems and products has led to increasing calls for regulation. The popular media, and even the technical media, contain misinformation and misplaced fears, but plenty of legitimate issues exist even if their relative importance is sometimes misunderstood. Policymakers, researchers, and developers need to be in dialog about the true needs and potential dangers of regulation.

“Google top lawyer pushes back against one-size-fits-all rules for AI” by Janosch Delcker at POLITICO is an example of corporate reaction to the calls for regulation. “Understanding exactly the applications that we see for AI, and how those should be regulated, that’s an important next chapter,” Kent Walker, Google’s senior vice president for global affairs and the company’s chief legal officer, told POLITICO during a recent visit to Germany. “But you generally don’t want one-size-fits-all regulation, especially for a tool that is going to be used in a lot of different ways,” he added.

From our policy perspective, the significant risks from AI systems include misuse and faulty or unsafe designs that can create bias, lack of transparency in use, and loss of privacy. AI systems are known to discriminate against minorities, both unintentionally and intentionally. An important discussion we should be having is whether governments, international organizations, and big corporations, which have already released dozens of non-binding guidelines for the responsible development and use of AI, are the best entities for writing and enforcing regulations. Non-binding principles will not hold companies that develop and apply AI products accountable. An important point in this regard is to hold companies responsible for the product design process itself, not just for testing products after they are in use.

Introducing new government regulations is a long process subject to pressure from lobbyists, and the current US administration is generally inclined against regulation anyway. We should discuss alternatives such as clearinghouses and consumer groups that endorse AI products designed for safety and ethical use. If well publicized, the endorsements of respected non-partisan groups, including professional societies, might be more effective and timely than government regulations. The European Union has released its Ethics Guidelines for Trustworthy AI, and a second document with recommendations on how to boost investment in Europe’s AI industry is to follow. In May 2019, the Organization for Economic Cooperation and Development (OECD) issued its first set of international Principles on Artificial Intelligence, which the United States and leading AI companies have embraced.
