AI Matters: our blog
AI Regulation
With AI in the news so much over the past year, public awareness of potential problems arising from the proliferation of AI systems and products has led to increasing calls for regulation. The popular media, and even the technical media, contain misinformation and misplaced fears, but plenty of legitimate issues exist, even if their relative importance is sometimes misunderstood. Policymakers, researchers, and developers need to be in dialogue about the true needs and potential dangers of regulation.
“Google top lawyer pushes back against one-size-fits-all rules for AI” by Janosch Delcker at POLITICO is an example of corporate reaction to the calls for regulation. “Understanding exactly the applications that we see for AI, and how those should be regulated, that’s an important next chapter,” Kent Walker, Google’s senior vice president for global affairs and the company’s chief legal officer, told POLITICO during a recent visit to Germany. “But you generally don’t want one-size-fits-all regulation, especially for a tool that is going to be used in a lot of different ways,” he added.
From our policy perspective, the significant risks from AI systems include misuse and faulty or unsafe designs that can create bias, opacity of use, and loss of privacy. AI systems are known to discriminate against minorities, both unintentionally and by design. An important discussion we should be having is whether governments, international organizations, and big corporations, which have already released dozens of non-binding guidelines for the responsible development and use of AI, are the best entities to write and enforce regulations. Non-binding principles will not hold companies developing and applying AI products accountable. An important point in this regard is to hold companies responsible for the product design process itself, not just for testing products after they are in use.
Introducing new government regulations is a long process, one subject to pressure from lobbyists, and the current US administration is generally disinclined toward regulation anyway. We should discuss alternatives like clearinghouses and consumer groups endorsing AI products designed for safety and ethical use. If well publicized, the endorsements of respected non-partisan groups, including professional societies, might be more effective and timely than government regulations. The European Union has released its Ethics Guidelines for Trustworthy AI, and a second document with recommendations on how to boost investment in Europe's AI industry is to be published. In May 2019, the Organization for Economic Cooperation and Development (OECD) issued its OECD Principles on Artificial Intelligence, the first international set of such principles, which have been embraced by the United States and leading AI companies.
Events and Announcements
AAAI Policy Initiative
AAAI has established a new mailing list on US Policy that will focus exclusively on the discussion of US policy matters related to artificial intelligence. All members and affiliates are invited to join the list at https://aaai.org/Organization/mailing-lists.php
Participants will have the opportunity to subscribe or unsubscribe at any time. The mailing list will be moderated, and all posts will be approved before dissemination. This is a great opportunity for another productive partnership between AAAI and SIGAI policy work.
EPIC Panel on June 5th

A panel on AI, human rights, and US policy will be hosted by the Electronic Privacy Information Center (EPIC) at its annual meeting (and 25th-anniversary celebration) on June 5, 2019, at the National Press Club in Washington, DC. Our own Lorraine Kisselburgh will join Harry Lewis (Harvard), Sherry Turkle (MIT), Lynne Parker (UTenn and White House OSTP director for AI), Sarah Box (OECD), and Bilyana Petkova (EPIC and Maastricht) to discuss AI policy directions for the US. The event is free and open to the public. You can register at https://epic.org/events/June5AIpanel/
2019 ACM SIGAI Election Reminder
Please remember to vote and to review the information on http://www.acm.org/elections/sigs/voting-page. Please note that 16:00 UTC, 14 June 2019 is the deadline for submitting your vote. To access the secure voting site, you will enter your email address (the one associated with your ACM/SIG member record) to reach the menu of active SIG elections for which you are eligible. In the online menu, select your Special Interest Group and enter the 10-digit Unique Pin.
AI Research Roadmap
The Computing Community Consortium (CCC) is requesting comments on the draft of A 20-Year Community Roadmap for AI Research in the US. Please submit your comments here by May 28, 2019. See the AI Roadmap Website for more information.
AI Researchers Win Turing Award
We are pleased to announce that the recipients of the 2018 ACM A.M. Turing Award are AI researchers Yoshua Bengio, Professor at the University of Montreal and Scientific Director at Mila; Geoffrey Hinton, Professor at the University of Toronto and Chief Scientific Advisor at the Vector Institute; and Yann LeCun, Professor at New York University and Chief AI Scientist at Facebook.
Their citation reads as follows:
For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.
Bengio, Hinton, and LeCun will be presented with the Turing Award at the June 15, 2019 ACM Awards Banquet in San Francisco.
Please see https://awards.acm.org/about/2018-turing for more information.
New Jobs in the Future of Work
As employers increasingly adopt automation technology, many workforce analysts look to jobs and career paths in new disciplines, especially data science and applications of AI, to absorb workers who are displaced by automation. By some accounts, data science is in first place for technology career opportunities. Estimating current and near-term numbers of data scientists and AI professionals is difficult because of the different job titles and position descriptions used by organizations and job recruiters. Likewise, many employees in positions with traditional titles have transitioned to data science and AI work. Better estimates, or at least upper limits, are necessary for evidence-based predictions of unemployment rates due to automation over the next decade. McKinsey & Company estimates 375 million jobs will be lost globally due to AI and other automation technologies by 2030, and one school of thought in today's public discourse is that at least that number of new jobs will be created. An issue for the AI community and policymakers is the nature, quality, and number of the new jobs, and how many data science and AI technology jobs will contribute to meeting the shortfall.
An article in KDnuggets by Gregory Piatetsky points out that a “Search for data scientist (without quotes) finds about 30,000 jobs, but we are not sure how many of those jobs are for scientists in other areas … a person employed to analyze and interpret complex digital data, such as the usage statistics of a website, especially in order to assist a business in its decision-making … titles include Data Scientist, Data Analyst, Statistician, Bioinformatician, Neuroscientist, Marketing executive, Computer scientist, etc…” Data on this issue could clarify the net number of future jobs in AI, data science, and related areas. Computer science had a similar history: a boom in the new field was followed by the migration of computing into many other disciplines. Another factor is that “long-term, however, automation will be replacing many jobs in the industry, and Data Scientist job will not be an exception. Already today companies like DataRobot and H2O offer automated solutions to Data Science problems. Respondents to KDnuggets 2015 Poll expected that most expert-level Predictive Analytics/Data Science tasks will be automated by 2025. To stay employed, Data Scientists should focus on developing skills that are harder to automate, like business understanding, explanation, and story telling.” This issue is also important in estimating the number of new jobs by 2030 for displaced workers.
In his Forbes article “Job Loss From AI? There’s More To Fear!”, Kiran Garimella examines the scenario in which not enough new jobs emerge to replace the ones lost through automation. His interesting perspective turns to economists, sociologists, and insightful policymakers “to re-examine and re-formulate their models of human interaction and organization and … re-think incentives and agency relationships.”
OpenAI
A recent controversy erupted over the new version of OpenAI’s language model, which generates fluent continuations of text after unsupervised training on large samples of writing. OpenAI’s announcement, and its decision not to follow open-source practices, raises interesting policy issues about the regulation and self-regulation of AI products. OpenAI, a non-profit AI research company founded by Elon Musk and others, announced on February 14, 2019, that “We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.”
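For readers unfamiliar with the underlying technique, the core idea of unsupervised language modeling is simply to learn next-word statistics from raw, unlabeled text and then generate a continuation one word at a time. The following toy bigram counter is a minimal sketch of that idea only; it is nothing like OpenAI's large-scale neural model, and the tiny corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus; real systems train on vast text collections.
corpus = (
    "the model reads large samples of writing and learns which word "
    "tends to follow which and the model then generates text one word at a time"
).split()

# Unsupervised step: count how often each word follows each preceding word.
# No labels are needed; the raw text itself supplies the next-word targets.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, length=8):
    """Greedily emit the most frequent observed next word, starting from `start`."""
    word, out = start, [start]
    for _ in range(length):
        if word not in next_counts:  # dead end: no observed continuation
            break
        word = next_counts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Large neural models replace the bigram table with learned representations that condition on long contexts, which is what lets them produce the coherent paragraphs described in the announcement; the generation loop, however, is conceptually the same.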
The reactions to the announcement followed from the decision behind the following statement in the release: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
Examples of the many reactions appeared in TechCrunch and Wired. The Electronic Frontier Foundation has an analysis of the manner of the release (letting journalists know first) and concludes, “when an otherwise respected research entity like OpenAI makes a unilateral decision to go against the trend of full release, it endangers the open publication norms that currently prevail in language understanding research.”
This issue is an example of previous ideas in our Public Policy blog about who, if anyone, should regulate AI developments and products that have potential negative impacts on society. Do we rely on self-regulation or require governmental regulations? What if the U.S. has regulations and other countries do not? Would a clearinghouse approach put profit-based pressure on developers and corporations? Can the open source movement be successful without regulatory assistance?