Events and Announcements

AAAI Policy Initiative

AAAI has established a new mailing list on US Policy that will focus exclusively on the discussion of US policy matters related to artificial intelligence. All members and affiliates are invited to join the list at https://aaai.org/Organization/mailing-lists.php

Participants will have the opportunity to subscribe or unsubscribe at any time. The mailing list will be moderated, and all posts will be approved before dissemination. This is a great opportunity for another productive partnership between AAAI and SIGAI on policy work.

EPIC Panel on June 5th

A panel on AI, human rights, and US policy will be hosted by the Electronic Privacy Information Center (EPIC) at its annual meeting (and celebration of its 25th anniversary) on June 5, 2019, at the National Press Club in Washington, DC. Our Lorraine Kisselburgh will join Harry Lewis (Harvard), Sherry Turkle (MIT), Lynne Parker (UTenn and White House OSTP director for AI), Sarah Box (OECD), and Bilyana Petkova (EPIC and Maastricht) to discuss AI policy directions for the US. The event is free and open to the public. You can register at https://epic.org/events/June5AIpanel/

2019 ACM SIGAI Election Reminder

Please remember to vote and to review the information on http://www.acm.org/elections/sigs/voting-page. Please note that the deadline for submitting your vote is 16:00 UTC on 14 June 2019. To access the secure voting site, enter your email address (the one associated with your ACM/SIG member record) to reach the menu of active SIG elections for which you are eligible. In the online menu, select your Special Interest Group and enter the 10-digit unique PIN.

AI Research Roadmap

The Computing Community Consortium (CCC) is requesting comments on the draft of A 20-Year Community Roadmap for AI Research in the US. Please submit your comments here by May 28, 2019. See the AI Roadmap Website for more information. 

Here is a link to the whole report and links to individual sections:

     Title Page, Executive Summary, and Table of Contents 

  1. Introduction
  2. Major Societal Drivers for Future Artificial Intelligence Research 
  3. Overview of Core Technical Areas of AI Research Roadmap: Workshop Reports 
    1. Workshop I: A Research Roadmap for Integrated Intelligence 
    2. Workshop II: A Research Roadmap for Meaningful Interaction 
    3. Workshop III: A Research Roadmap for Self-Aware Learning 
  4. Major Findings 
  5. Recommendations
  6. Conclusions

Appendices (participants and contributors)

New Jobs in the Future of Work

As employers increasingly adopt automation technology, many workforce analysts look to jobs and career paths in new disciplines, especially data science and applications of AI, to absorb workers who are displaced by automation. By some accounts, data science ranks first among technology career opportunities. Estimating current and near-term numbers of data scientists and AI professionals is difficult because organizations and job recruiters use differing job titles and position descriptions. Likewise, many employees in positions with traditional titles have transitioned to data science and AI work. Better estimates, or at least upper limits, are necessary for evidence-based predictions of unemployment rates due to automation over the next decade. McKinsey & Company estimates that 375 million jobs will be lost globally to AI and other automation technologies by 2030, and one school of thought in today’s public discourse is that at least that number of new jobs will be created. An issue for the AI community and policymakers is the nature, quality, and number of the new jobs – and how many data science and AI technology jobs will contribute to meeting the shortfall.

An article in KDnuggets by Gregory Piatetsky points out that a “Search for data scientist (without quotes) finds about 30,000 jobs, but we are not sure how many of those jobs are for scientists in other areas … a person employed to analyze and interpret complex digital data, such as the usage statistics of a website, especially in order to assist a business in its decision-making … titles include Data Scientist, Data Analyst, Statistician, Bioinformatician, Neuroscientist, Marketing executive, Computer scientist, etc…”  Data on this issue could clarify the net number of future jobs in AI, data science, and related areas. Computer science had a similar history, with the boom in the new field followed by the migration of computing into many other disciplines. Another factor is that “long-term, however, automation will be replacing many jobs in the industry, and Data Scientist job will not be an exception. Already today companies like DataRobot and H2O offer automated solutions to Data Science problems. Respondents to KDnuggets 2015 Poll expected that most expert-level Predictive Analytics/Data Science tasks will be automated by 2025. To stay employed, Data Scientists should focus on developing skills that are harder to automate, like business understanding, explanation, and storytelling.” This issue is also important in estimating the number of new jobs by 2030 for displaced workers.

Kiran Garimella, in his Forbes article “Job Loss From AI? There’s More To Fear!”, examines the scenario of not enough new jobs to replace those lost through automation. His interesting perspective turns to economists, sociologists, and insightful policymakers “to re-examine and re-formulate their models of human interaction and organization and … re-think incentives and agency relationships.”

OpenAI

A recent controversy erupted over OpenAI’s new version of their language model, which generates well-written continuations of text based on unsupervised analysis of large samples of writing. Their announcement and decision not to follow open-source practices raise interesting policy issues about regulation and self-regulation of AI products. OpenAI, a non-profit AI research company founded by Elon Musk and others, announced on February 14, 2019, that “We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.”

The reactions to the announcement centered on the decision behind the following statement in the release: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
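
For readers who want a concrete sense of what experimenting with the smaller released model looks like in practice, here is a minimal sketch of sampling a text continuation from it. It assumes the third-party Hugging Face transformers package and PyTorch are installed; the prompt and sampling parameters are illustrative choices, not OpenAI’s own release code.

```python
# Minimal sketch: sampling a continuation from the small, publicly released
# GPT-2 checkpoint via the Hugging Face "transformers" package (an assumed
# setup; this is illustrative, not OpenAI's release code).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "AI policy in the United States should"    # hypothetical prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling keeps the continuation varied but reasonably coherent.
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Even this small-model exercise makes the policy tension tangible: the capability is easy to reproduce and extend, which is precisely why the decision to withhold the larger trained model drew so much attention.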

Examples of the many reactions appeared on TechCrunch.com and in Wired. The Electronic Frontier Foundation has an analysis of the manner of the release (letting journalists know first) and concludes, “when an otherwise respected research entity like OpenAI makes a unilateral decision to go against the trend of full release, it endangers the open publication norms that currently prevail in language understanding research.”

This issue is an example of questions raised previously in our Public Policy blog about who, if anyone, should regulate AI developments and products that have potential negative impacts on society. Do we rely on self-regulation or require governmental regulations? What if the U.S. has regulations and other countries do not? Would a clearinghouse approach put profit-based pressure on developers and corporations? Can the open-source movement be successful without regulatory assistance?

AI Hype Not

A recent item in Science|Business, “Artificial intelligence nowhere near the real thing, says German AI chief” by Éanna Kelly, gives policy-worthy warnings and ideas: “In his 20 years as head of Germany’s biggest AI research lab Wolfgang Wahlster has seen the tech hype machine splutter three times. As he hands over to a new CEO, he warns colleagues: ‘Don’t over-promise.’ … [T]he computer scientist who has just ended a 20-year stint as CEO of the German Research Centre for Artificial Intelligence says that [warning] greatly underestimates the distance between AI and its human counterpart: ‘We’re years away from a game changer in the field. I always warn people, one should be a bit careful with what they claim. Every day you work on AI, you see the big gap between human intelligence and AI’, Wahlster told Science|Business.”

For AI policy, we should remember to look out for overpromising, but we also need to be mindful of the time frame for making effective policy and be fully engaged now. Importantly, our efforts inform policymakers about the real opportunities to make AI successful. A recent article in The Conversation by Ben Shneiderman, “What alchemy and astrology can teach artificial intelligence researchers,” gives insightful information and advice on how to avoid being distracted “… from where the real progress is already happening: in systems that enhance – rather than replace – human capabilities.” Shneiderman recommends that technology designers shift “from trying to replace or simulate human behavior in machines to building wildly successful applications that people love to use.”

AAII and EAAI

President Trump issued an Executive Order on February 11, 2019, entitled “Maintaining American Leadership In Artificial Intelligence”. The full text is at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/. The American AI Initiative, of course, needs analysis and implementation details. Two sections of the Executive Order give hope for opportunities to provide public input:

Sec (5)(a)(1)(i): Within 90 days of the date of this order, the OMB Director shall publish a notice in the Federal Register inviting the public to identify additional requests for access or quality improvements for Federal data and models that would improve AI R&D and testing. …[T]hese actions by OMB will help to identify datasets that will facilitate non-Federal AI R&D and testing.
and
Sec (6)(b): To help ensure public trust in the development and implementation of AI applications, OMB shall issue a draft version of the memorandum for public comment before it is finalized.
Please stay tuned for ways that our ACM US Technology Policy Committee (USTPC) can help us provide our feedback on the implementation of the Executive Order.

A summary and analysis report is available from the Center for Data Innovation: Executive Order Will Help Ensure U.S. Leadership in AI. They comment that the administration “needs to do more than reprogram existing funds for AI research, skill development, and infrastructure development” and “should ask Congress for significant funding increases to (a) expand these research efforts; (b) implement light-touch regulation for AI; (c) resist calls to implement roadblocks or speed bumps for this technology, including export restrictions; (d) rapidly expand adoption of AI within government, implement comprehensive reforms to the nation’s workforce training and adjustment policies.”

The latter point was a topic in my invited talk at EAAI-19. Opportunities and innovation in education and training for the workforce of the future rely crucially on public policymaking about workers in the era of increasing use of AI and other automation technologies. An important issue is who will provide training that is timely (by 2030), practical, and affordable for workers who are impacted by job disruptions and transitioning to the new predicted post-automation jobs. The stakeholders, along with workers, are the schools, employers, unions, community groups, and others. Even if more jobs are created than lost, the question remains whether work in the AI future will be available to the full range of people in the current and near-future workforce.

Section 1 of the Executive Order “Maintaining American Leadership In Artificial Intelligence” follows:
Section 1.  Policy and Principles.  Artificial Intelligence (AI) promises to drive growth of the United States economy, enhance our economic and national security, and improve our quality of life. The United States is the world leader in AI research and development (R&D) and deployment.  Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.  The Federal Government plays an important role in facilitating AI R&D, promoting the trust of the American people in the development and deployment of AI-related technologies, training a workforce capable of using AI in their occupations, and protecting the American AI technology base from attempted acquisition by strategic competitors and adversarial nations.  Maintaining American leadership in AI requires a concerted effort to promote advancements in technology and innovation, while protecting American technology, economic and national security, civil liberties, privacy, and American values and enhancing international and industry collaboration with foreign partners and allies.  It is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy, the American AI Initiative (Initiative), guided by five principles:
(a)  The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.
(b)  The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today’s industries.
(c)  The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today’s economy and jobs of the future.
(d)  The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.
(e)  The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.

Autonomous Vehicles: Policy and Technology

In 2018, we discussed language that aims at safety and degrees of autonomy rather than possibly unattainable goals of completely autonomous things. A better approach, at least for the next 5-10 years, is to seek the correct balance between technology and humans in hybrid devices and systems. See, for example, the Unmanned Integrated Systems Roadmap, 2017-2042 and Ethically Aligned Design. We also need to consider the limits and possibilities of research on the technologies and their impacts on time frames and the proper focus of policymaking.

In a recent interview, Dr. Harold Szu, a co-founder and former governor of the International Neural Network Society, discusses research ideas that better mimic human thinking and that could dramatically reduce the time to develop autonomous technology. He discusses a possible new level of brain-style computing that incorporates fuzzy membership functions into autonomous control systems. Autonomous technology incorporating human characteristics, along with safe policies and earlier arrival of brain-style technologies, could usher in the next big economic boom. For more details, view the Harold Szu interview.
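
To make the idea of fuzzy membership in control a bit more concrete, here is a small, hypothetical sketch (not drawn from Dr. Szu’s work): a braking rule that blends graded judgments of “near” and “far” rather than switching at a hard threshold. The distances and brake levels are illustrative assumptions only.

```python
# Hypothetical sketch of a fuzzy membership function in an autonomous control
# decision (illustrative only): braking effort is blended from graded
# "near"/"far" judgments instead of a single hard distance threshold.

def membership_near(distance_m: float) -> float:
    """Degree in [0, 1] to which an obstacle counts as 'near' (assumed ramp)."""
    if distance_m <= 5.0:
        return 1.0
    if distance_m >= 30.0:
        return 0.0
    return (30.0 - distance_m) / 25.0  # linear ramp between 5 m and 30 m


def brake_command(distance_m: float) -> float:
    """Blend full braking (1.0) and no braking (0.0) by the degree of 'near'."""
    near = membership_near(distance_m)
    far = 1.0 - near
    return near * 1.0 + far * 0.0  # weighted average of the two brake levels


if __name__ == "__main__":
    for d in (3.0, 12.0, 25.0, 40.0):
        print(f"distance {d:5.1f} m -> brake effort {brake_command(d):.2f}")
```

The point of the graded membership function is that small changes in sensor readings produce small changes in the control output, one simple way that human-like judgment can be folded into autonomous control.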

Discussion Issues for 2019

Facebook, Face Recognition, Autonomous Things, and the Future of Work

Four focus areas of discussions at the end of 2018 are the initial topics for the SIGAI Policy Blog as we start 2019.  The following, with links to resources, are important ongoing subjects for our Policy blogsite in the new year:

Facebook continues to draw attention to the general issue of data privacy and the role of personal data in business models. Here are some good resources to check:
NY Times on Facebook Privacy
Facebook Partners
Spotify
Netflix

Facial recognition software is known to be flawed, with side effects of bias, unwanted surveillance, and other problems. The Safe Face Pledge, developed by the Algorithmic Justice League and the Center on Privacy & Technology at Georgetown University Law Center, is an example of emerging efforts to make organizations aware of problems with facial recognition products, for example in autonomous weapons systems and law enforcement agencies. The Safe Face Pledge asks that companies commit to safety in business practices and promote public policy for broad regulation and government oversight of facial recognition applications.

“Autonomous” Things: Degrees of Separation: The R&D for “autonomous” vehicles and other devices that dominate our daily lives poses challenges for technologies as well as for ethics and policy considerations. In 2018, we discussed language that aims at safety and degrees of autonomy rather than possibly unattainable goals of completely autonomous things. A better approach may be to seek the correct balance between technology and humans in hybrid devices and systems. See, for example, the Unmanned Integrated Systems Roadmap, 2017-2042 and Ethically Aligned Design.

The Future of Work and Education is a topic that involves not only predicting the workforce of the future but also how society needs to prepare for it. Many experts believe that our current school systems are not up to the challenge and that industry and government programs are needed to meet the demands emerging in just a few years. See, for example, writing by the Ford Foundation and the World Economic Forum.

We welcome your feedback and discussions as we enter the 2019 world of AI and policy!

Follow the Data

The Ethical Machine – Big Ideas for Designing Fairer AI and Algorithms – is a “project that presents ideas to encourage a discussion about designing fairer algorithms” at the Shorenstein Center on Media, Politics, and Public Policy, Harvard Kennedy School. The November 27, 2018, publication is “Follow the Data! Algorithmic Transparency Starts with Data Transparency” by Julia Stoyanovich and Bill Howe. Their focus is local and municipal governments and NGOs that deliver vital human services in health, housing, and mobility. In the article, they give a welcome emphasis to the role of data, rather than the common focus these days on algorithms alone. They write, “data is used to customize generic algorithms for specific situations—that is to say that algorithms are trained using data. The same algorithm may exhibit radically different behavior—make different predictions; make a different number of mistakes and even different kinds of mistakes—when trained on two different data sets. In other words, without access to the training data, it is impossible to know how an algorithm would actually behave.” See their article for more discussion on designing systems for data transparency.
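
Stoyanovich and Howe’s point is easy to demonstrate. Here is a minimal sketch, using scikit-learn and entirely synthetic, hypothetical data, in which the identical algorithm trained on two different data sets gives different answers for the same case.

```python
# Minimal sketch of the authors' point that the same algorithm, trained on
# different data sets, can behave very differently. The data, features, and
# "city" labels are synthetic and hypothetical; assumes scikit-learn and NumPy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_training_data(shift: float):
    """Two-feature examples whose labeling rule depends on `shift`."""
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Identical algorithm, two different (hypothetical) training sets.
X_a, y_a = make_training_data(shift=+1.0)   # e.g., data collected in city A
X_b, y_b = make_training_data(shift=-1.0)   # e.g., data collected in city B

model_a = LogisticRegression().fit(X_a, y_a)
model_b = LogisticRegression().fit(X_b, y_b)

case = np.array([[0.5, 1.0]])               # the same case scored by both models
print("trained on A:", model_a.predict(case)[0], model_a.predict_proba(case)[0])
print("trained on B:", model_b.predict(case)[0], model_b.predict_proba(case)[0])
```

Without seeing the training data, an auditor looking only at the shared algorithm could not tell these two deployed models apart, which is exactly why the authors argue that algorithmic transparency starts with data transparency.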

US and European Policy
Adam Eisgrau, ACM Director of Global Policy and Public Affairs, published an update on the ACM US and Europe Policy Committees in the November 29 ACM MemberNet; see that update for the key points.

Pew Report on Attitudes Toward Algorithms

Pew Research Center just released a report, Public Attitudes Toward Computer Algorithms, by Aaron Smith, on Americans’ concerns about fairness and effectiveness in making important decisions. The report says, “This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual … the survey presented respondents with four different scenarios in which computers make decisions by collecting and analyzing large quantities of public and private data. Each of these scenarios were based on real-world examples of algorithmic decision-making … and included: a personal finance score used to offer consumers deals or discounts; a criminal risk assessment of people up for parole; an automated resume screening program for job applicants; and a computer-based analysis of job interviews. The survey also included questions about the content that users are exposed to on social media platforms as a way to gauge opinions of more consumer-facing algorithms.”
The report is available at http://www.pewinternet.org/2018/11/16/public-attitudes-toward-computer-algorithms/

Legal AI

AI is impacting law and policy issues as both a tool and a subject area. Advances in AI provide tools for carrying out legal work in business and government, and the use of AI in all parts of society is creating new demands and challenges for the legal profession.

Lawyers and AI Tools

In a recent study, “20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.” The LawGeex AI system attempted correct identification of basic legal principles in contracts. The results suggest that AI systems can produce higher accuracy in shorter times compared to lawyers. As with other areas of AI applications, issues include trust in automation to make skilled legal decisions, safety in using AI systems, and impacts on the workforce of the future. For legal work, AI systems can potentially reduce the time needed for high-volume, low-risk contracts and give lawyers more time for less routine work. Policies should focus on automation where it is possible and safe, and AI for legal work is another example of the need for collaborative roles for humans and AI systems.

AI Impact on Litigation

The other side of AI as a tool is the emerging litigation arising from the use of AI in all parts of society. Understanding the nature of adaptive AI systems can be crucial for fact-finders and difficult to explain to non-experts. Smart policymaking needs to clarify the liability issues and ethics in cases involving the use of AI technology. “Artificial Intelligence and the Role of Expert Witnesses in AI Litigation” by Dani Alexis Ryskamp, writing for The Expert Institute, discusses artificial intelligence in civil claims and the role of expert witnesses in elucidating the complexities of the technology in the context of litigation. “Over the past few decades, everything from motor vehicles to household appliances has become more complex and, in many cases, artificial intelligence only adds to that complexity. For end-users of AI products, determining what went wrong and whose negligence was responsible can be bafflingly complex. Experts retained in AI cases typically come from fields like computer or mechanical engineering, information systems, data analysis, robotics, and programming. They may specialize in questions surrounding hardware, software, 3D-printing, biomechanics, Bayesian logic, e-commerce, or other disciplines. The European Commission recently considered the question of whether to give legal status to certain robots. One of the issues weighed in the decision involved legal liability: if an AI-based robot or system, acting autonomously, injures a person, who is liable?”