AI Revolution or Evolution

An interesting IEEE Spectrum article, “AI and Economic Productivity: Expect Evolution, Not Revolution” by Jeffrey Funk, questions popular claims about the rapid pace of AI’s impact on productivity and the economy. He asserts that “Despite the hype, artificial intelligence will take years to significantly boost economic productivity.” If correct, this has serious implications for policymakers who have chosen to be proactive. The article raises good points, but many of its examples do not look like real AI, at least not as a dominant component. Putting “smart” in the name of a product doesn’t make it AI, and automation doesn’t necessarily use AI.

On a broader note, we should care about the technology language we use and be aware of the usual practices in commercialization. As discussed in previous blog posts, stretching the meanings of terms like AI, machine learning, and algorithms too far makes rational discourse more difficult. Some of us remember the marketing of expert systems and relational databases: companies do a disservice to society by claiming that each breakthrough technology is actually in their products. Here we go again: today almost anything counts as AI, depending on the point you want to make and the products you want to sell.

Another issue raised by the article concerns its emphasis on startups as the leaders of economic impact, as opposed to the results of innovations from established industry and government labs. Technologies have adoption curves, running from early adopters through the laggards over roughly seven years. Add to that the difficulty of making a startup succeed, and a decade or so is probably the minimum timescale for a large impact on the economy. A better perspective on revolution versus evolution could come from longitudinal evaluations of trends. In that case, a good endpoint for a hypothesis about dramatic impact on productivity might be the 2030–2035 timeframe.
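The adoption-curve reasoning above can be made concrete with a standard logistic (S-curve) model. This is an illustrative sketch, not a claim from the article: the rate constant `k` is a hypothetical parameter, chosen here so that the window from early adopters (10% adoption) to laggards (90% adoption) spans about seven years.

```python
import math

def logistic_adoption(t, k, t0):
    """Fraction of the market that has adopted a technology at time t (years),
    under a simple logistic model centered at t0 with rate constant k."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

def adoption_window(k):
    """Years to go from 10% (early adopters) to 90% (laggards) adoption.
    Solving the logistic for those two fractions gives 2*ln(9)/k."""
    return 2.0 * math.log(9.0) / k

# Pick k so the 10%-to-90% window is the article's ~7 years.
k = 2.0 * math.log(9.0) / 7.0
```

With that `k`, adding typical startup lead times on top of the seven-year adoption window is what pushes the full economic impact out a decade or more.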

A problem with using a vague or broad notion of AI is that policymakers could miss the revolutionary impact of data science, which may or may not involve real AI. Data science probably has the best chance of dramatically impacting society and the economy in both the short and long terms. It has the advantage of not requiring the design and manufacture of physical objects, and thus not always having to wait for consumers to adopt new products. Data science is already affecting society and employment, with obvious, and not so obvious, revolutionary impacts on our lives.

PCAST and AI Plan

Executive Order on The President’s Council of Advisors on Science and Technology (PCAST)

President Trump issued an executive order on October 22 re-establishing the President’s Council of Advisors on Science and Technology (PCAST), an advisory body that consists of science and technology leaders from the private and academic sectors. PCAST is to be chaired by Kelvin Droegemeier, director of the Office of Science and Technology Policy, and Edward McGinnis, formerly with DOE, is to serve as the executive director. The majority of the 16 members are from key industry sectors. The executive order says that the council is expected to address “strengthening American leadership in science and technology, building the Workforce of the Future, and supporting foundational research and development across the country.” For more information, see the Inside Education article about the first appointments.

Schumer AI Plan

Jeffrey Mervis has a November 11, 2019, article in AAAS News from Science on a recommendation for the government to create a new agency funded with $100 billion over 5 years for basic AI research. “Senator Charles Schumer (D–NY) says the initiative would enable the United States to keep pace with China and Russia in a critical research arena and plug gaps in what U.S. companies are unwilling to finance.”

Schumer presented his ideas publicly in a speech in early November to senior national security and research policymakers, following a recent presidential executive order. He wants to create a new national science and technology fund for “fundamental research related to AI and some other cutting-edge areas” such as quantum computing, 5G networks, robotics, cybersecurity, and biotechnology. Funds would encourage research at U.S. universities, companies, and other federal agencies and support incubators for moving research into commercial products. An additional article can be found in Defense News.

Work Transition

AI and other automation technologies have great promise for benefitting society and enhancing productivity, but appropriate policies by companies and governments are needed to help manage workforce transitions and make them as smooth as possible. The McKinsey Global Institute report AI, automation, and the future of work: Ten things to solve for states that “There is work for everyone today and there will be work for everyone tomorrow, even in a future with automation. Yet that work will be different, requiring new skills, and a far greater adaptability of the workforce than we have seen. Training and retraining both mid-career workers and new generations for the coming challenges will be an imperative. Government, private-sector leaders, and innovators all need to work together to better coordinate public and private initiatives, including creating the right incentives to invest more in human capital. The future with automation and AI will be challenging, but a much richer one if we harness the technologies with aplomb—and mitigate the negative effects.” They list likely actionable and scalable solutions in several key areas:

1. Ensuring robust economic and productivity growth

2. Fostering business dynamism

3. Evolving education systems and learning for a changed workplace

4. Investing in human capital

5. Improving labor-market dynamism

6. Redesigning work

7. Rethinking incomes

8. Rethinking transition support and safety nets for workers affected

9. Investing in drivers of demand for work

10. Embracing AI and automation safely

In redesigning work and rethinking incomes, we have the chance for bold ideas that improve the lives of workers and give them more interesting jobs that could provide meaning, purpose, and dignity.

Some of the categories of new jobs that could replace old jobs are
1. Making, designing, and coding in AI, data science, and engineering occupations
2. Working in new types of non-AI jobs that are enhanced by AI, making unpleasant old jobs more palatable or providing new jobs that are more interesting; the gig economy and crowdsourcing ideas are examples that could provide creative employment options
3. Providing living wages for people to do things they love, for example, in the arts through dramatic funding increases for NEA and NEH programs. Grants to individual artists and musicians, professional and amateur musical organizations, and informal arts education initiatives could enrich communities while providing income for millions of people. Policies to implement this idea could be one piece of the future-of-work puzzle and far preferable for the economy and society to allowing large-scale unemployment due to the loss of work to automation.

National AI Strategy

The National Artificial Intelligence Research and Development Strategic Plan, an update of the report by the Select Committee on Artificial Intelligence of the National Science & Technology Council, was released in June 2019, and the President’s Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, was released on February 11. The Computing Community Consortium (CCC) recently released the AI Roadmap Website, and an interesting industry response is “Intel Gets Specific on a National Strategy for AI: How to Propel the US into a Sustainable Leadership Position on the Global Artificial Intelligence Stage” by Naveen Rao and David Hoffman. Excerpts follow, and the accompanying links provide the details:

“AI is more than a matter of making good technology; it is also a matter of making good policy. And that’s what a robust national AI strategy will do: continue to unlock the potential of AI, prepare for AI’s many ramifications, and keep the U.S. among leading AI countries. At least 20 other countries have published, and often funded, their national AI strategies. Last month, the administration signaled its commitment to U.S. leadership in AI by issuing an executive order to launch the American AI Initiative, focusing federal government resources to develop AI. Now it’s time to take the next step and bring industry and government together to develop a fully realized U.S. national strategy to continue leading AI innovation.

“… But to sustain leadership and effectively manage the broad social implications of AI, the U.S. needs coordination across government, academia, industry and civil society. This challenge is too big for silos, and it requires that technologists and policymakers work together and understand each other’s worlds.” Their call to action was released in May 2018.

Four Key Pillars

“Our recommendation for a national AI strategy lays out four key responsibilities for government. Within each of these areas we propose actionable steps. We provide some highlights here, and we encourage you to read the full white paper or scan the shorter fact sheet.

* Sustainable and funded government AI research and development can help to advance the capabilities of AI in areas such as healthcare, cybersecurity, national security and education, but there need to be clear ethical guidelines.

* Create new employment opportunities and protect people’s welfare given that AI has the potential to automate certain work activities.

* Liberate and share data responsibly, as the more data that is available, the more “intelligent” an AI system can become. But we need guardrails.

* Remove barriers and create a legal and policy environment that supports AI so that the responsible development and use of AI is not inadvertently derailed.”

AI Race Matters

China, the European Union, and the United States have been in the news about strategic plans and policies on the future of AI. The July 2 AI Matters policy blog post was on the U.S. National Artificial Intelligence Research and Development Strategic Plan, released in June, as an update of the report by the Select Committee on Artificial Intelligence of The National Science & Technology Council. The Computing Community Consortium (CCC) recently released the AI Roadmap Website.
Now, a Center for Data Innovation Report compares the current standings of China, the European Union, and the United States and makes policy recommendations. Here is the report summary: “Many nations are racing to achieve a global innovation advantage in artificial intelligence (AI) because they understand that AI is a foundational technology that can boost competitiveness, increase productivity, protect national security, and help solve societal challenges. This report compares China, the European Union, and the United States in terms of their relative standing in the AI economy by examining six categories of metrics—talent, research, development, adoption, data, and hardware. It finds that despite China’s bold AI initiative, the United States still leads in absolute terms. China comes in second, and the European Union lags further behind. This order could change in coming years as China appears to be making more rapid progress than either the United States or the European Union. Nonetheless, when controlling for the size of the labor force in the three regions, the current U.S. lead becomes even larger, while China drops to third place, behind the European Union. This report also offers a range of policy recommendations to help each nation or region improve its AI capabilities.”

About Face

Face recognition (FR) R&D has made great progress in recent years and has been prominent in the news. In public policy, many are calling for a reversal of the trajectory of FR systems and products. In the hands of people of good will, using products designed for safety and training systems with appropriate data, society and individuals could have a better life. The Verge reports on China’s use of the unique facial markings of pandas to identify individual animals. FR research includes work to mitigate negative outcomes, as with the Adobe and UC Berkeley work on Detecting Facial Manipulations in Adobe Photoshop: automatically detecting when images of faces have been manipulated by splicing, cloning, or object removal.
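The Adobe and UC Berkeley work trains a neural network for this task. As a much simpler illustration of the underlying image-forensics idea, a classic heuristic is error level analysis (ELA): recompress a JPEG and look at where the compression error differs sharply, which can be a weak sign of splicing. This sketch is not the Adobe/Berkeley method, and it assumes the Pillow library is available; ELA gives only circumstantial evidence and is easily fooled.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Recompress an image as JPEG and return the per-pixel difference.

    Regions whose difference is unusually high relative to the rest of
    the image *may* indicate splicing or cloning; this is a heuristic,
    not a reliable manipulation detector.
    """
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # one extra compression pass
    buf.seek(0)
    recompressed = Image.open(buf)
    return ImageChops.difference(original, recompressed)
```

A human analyst (or a downstream classifier) would then inspect the returned difference image for regions that stand out from the background error level.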

Intentional and unintentional application of systems that are not designed and trained for ethical use is a threat to society. Screening for terrorists could be good, but FR lie- and fraud-detection systems may not work properly. The safety of FR is currently an important issue for policymakers, but regulations could have negative consequences for AI researchers. As with many contemporary issues, conflicts arise because of conflicting policies in different countries.

Recent and current legislation is attempting to restrict the use of FR and possibly FR research.
* San Francisco, CA; Somerville, MA; and Oakland, CA, are the first three cities to limit the use of FR to identify people.
* “Facial recognition may be banned from public housing thanks to proposed law” – CNET reports that a bill will be introduced to address the issue that “… landlords across the country continue to install smart home technology and tenants worry about unchecked surveillance, there’s been growing concern about facial recognition arriving at people’s doorsteps.”
* The major social media companies are being pressed on “how they plan to handle the threat of deepfake images and videos on their platforms ahead of the 2020 elections.”
* A call for a more comprehensive ban on FR has been launched by the digital rights group Fight for the Future, seeking a complete Federal ban on government use of facial recognition surveillance.

Beyond legislation against FR research and banning certain products, work is in progress to enable safe and ethical use of FR. A more general example that could be applied to FR is the MITRE work The Ethical Framework for the Use of Consumer-Generated Data in Health Care, which “establishes ethical values, principles, and guidelines to guide the use of Consumer-Generated Data for health care purposes.”

US and G20 AI Policy

The past few weeks have been busy with government events and announcements on AI Policy.

The G20 on AI

Ministers from the Group of 20 major economies met on trade and the digital economy. They produced guiding principles for the use of artificial intelligence based on principles adopted last month by the 36-member OECD and an additional six countries. The G20 guidelines call for users and developers of AI to be fair and accountable, with transparent decision-making processes, and to respect the rule of law and values including privacy, equality, diversity, and internationally recognized labor rights. The principles also urge governments to ensure a fair transition for workers through training programs and access to new job opportunities.

Bipartisan Group of Legislators Act on “Deepfake” Videos

A bipartisan group of senators introduced legislation Friday intended to lessen the threat posed by “deepfake” videos, those created with AI technologies to manipulate original videos and produce misleading information. The legislation would require the Department of Homeland Security to conduct an annual study of deepfakes and related content and to assess the AI technologies used to create them. This could lead to changes in existing regulations, or to new regulations, affecting the use of AI.

Hearing on Societal and Ethical Implications of AI

The House Science, Space, and Technology Committee held a hearing on June 26 on the societal and ethical implications of artificial intelligence, now available on video.

The National Artificial Intelligence Research and Development Strategic Plan, released in June, is an update of the report by the Select Committee on Artificial Intelligence of The National Science & Technology Council.

On February 11, 2019, the President signed Executive Order 13859 Maintaining American Leadership in Artificial Intelligence. According to Michael Kratsios, Deputy Assistant to the President for Technology Policy, this order “launched the American AI Initiative, which is a concerted effort to promote and protect AI technology and innovation in the United States. The Initiative implements a whole-of-government strategy in collaboration and engagement with the private sector, academia, the public, and likeminded international partners. Among other actions, key directives in the Initiative call for Federal agencies to prioritize AI research and development (R&D) investments, enhance access to high-quality cyberinfrastructure and data, ensure that the Nation leads in the development of technical standards for AI, and provide education and training opportunities to prepare the American workforce for the new era of AI.

“The first seven strategies continue from the 2016 Plan, reflecting the reaffirmation of the importance of these strategies by multiple respondents from the public and government, with no calls to remove any of the strategies. The eighth strategy is new and focuses on the increasing importance of effective partnerships between the Federal Government and academia, industry, other non-Federal entities, and international allies to generate technological breakthroughs in AI and to rapidly transition those breakthroughs into capabilities.”

Strategy 8: Expand Public–Private Partnerships to Accelerate Advances in AI is new in the June 2019 plan and “reflects the growing importance of public-private partnerships enabling AI R&D.” The strategy calls for expanding public-private partnerships to accelerate advances in AI: “Promote opportunities for sustained investment in AI R&D and for transitioning advances into practical capabilities, in collaboration with academia, industry, international partners, and other non-Federal entities.”

Points continued from the seven strategies and reinforced by the February Executive Order include

1. support for the development of instructional materials and teacher professional development in computer science at all levels, with emphasis at the K–12 levels

2. consideration of AI as a priority area within existing Federal fellowship and service programs

3. development of AI techniques for human augmentation

4. emphasis on achieving trust: AI system designers need to create accurate, reliable systems with informative, user-friendly interfaces.

The National Science and Technology Council (NSTC) is functioning again. NSTC is the principal means by which the Executive Branch coordinates science and technology policy across the diverse entities that make up the Federal research and development enterprise. A primary objective of the NSTC is to ensure that science and technology policy decisions and programs are consistent with the President’s stated goals. The NSTC prepares research and development strategies that are coordinated across Federal agencies aimed at accomplishing multiple national goals. The work of the NSTC is organized under committees that oversee subcommittees and working groups focused on different aspects of science and technology. More information is available at https://www.whitehouse.gov/ostp/nstc.

The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office of the President with advice on the scientific, engineering, and technological aspects of the economy, national security, homeland security, health, foreign relations, the environment, and the technological recovery and use of resources, among other topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of Management and Budget with an annual review and analysis of Federal research and development (R&D) in budgets, and serves as a source of scientific and technological analysis and judgment for the President with respect to major policies, plans, and programs of the Federal Government. More information is available at https://www.whitehouse.gov/ostp.

Groups that advise and assist the NSTC on AI include  

* The Select Committee on Artificial Intelligence (AI) addresses Federal AI R&D activities, including those related to autonomous systems, biometric identification, computer vision, human computer interactions, machine learning, natural language processing, and robotics. The committee supports policy on technical, national AI workforce issues.

* The Subcommittee on Machine Learning and Artificial Intelligence monitors the state of the art in machine learning (ML) and artificial intelligence within the Federal Government, in the private sector, and internationally.

* The Artificial Intelligence Research & Development Interagency Working Group coordinates Federal R&D in AI and supports and coordinates activities tasked by the Select Committee on AI and the NSTC Subcommittee on Machine Learning and Artificial Intelligence.

More information is available at https://www.nitrd.gov/groups/AI.

AI Regulation

With AI in the news so much over the past year, the public awareness of potential problems arising from the proliferation of AI systems and products has led to increasing calls for regulation. The popular media, and even technical media, contain misinformation and misplaced fears, but plenty of legitimate issues exist even if their relative importance is sometimes misunderstood. Policymakers, researchers, and developers need to be in dialog about the true needs and potential dangers of regulation.

“Google top lawyer pushes back against one-size-fits-all rules for AI” by Janosch Delcker at POLITICO is an example of corporate reaction to the calls for regulation. “Understanding exactly the applications that we see for AI, and how those should be regulated, that’s an important next chapter,” Kent Walker, Google’s senior vice president for global affairs and the company’s chief legal officer, told POLITICO during a recent visit to Germany. “But you generally don’t want one-size-fits-all regulation, especially for a tool that is going to be used in a lot of different ways,” he added.

From our policy perspective, the significant risks from AI systems include misuse and faulty, unsafe designs that can create bias, non-transparent use, and loss of privacy. AI systems are known to discriminate against minorities, both unintentionally and not. An important discussion we should be having is whether governments, international organizations, and big corporations, which have already released dozens of non-binding guidelines for the responsible development and use of AI, are the best entities for writing and enforcing regulations. Non-binding principles will not hold some companies developing and applying AI products accountable. An important point in this regard is to hold companies responsible for the product design process itself, not just for testing products after they are in use.

Introducing new government regulations is a long process subject to pressure from lobbyists, and the current US administration is generally inclined against regulation anyway. We should discuss alternatives such as clearinghouses and consumer groups that endorse AI products designed for safety and ethical use. If well publicized, the endorsements of respected non-partisan groups, including professional societies, might be more effective and timely than government regulations. The European Union has released its Ethics Guidelines for Trustworthy AI, and a second document with recommendations on how to boost investment in Europe’s AI industry is to be published. In May 2019, the Organization for Economic Cooperation and Development (OECD) issued the first set of international OECD Principles on Artificial Intelligence, which have been embraced by the United States and leading AI companies.

Events and Announcements

AAAI Policy Initiative

AAAI has established a new mailing list on US Policy that will focus exclusively on the discussion of US policy matters related to artificial intelligence. All members and affiliates are invited to join the list at https://aaai.org/Organization/mailing-lists.php

Participants will have the opportunity to subscribe or unsubscribe at any time. The mailing list will be moderated, and all posts will be approved before dissemination. This is a great opportunity for another productive partnership between AAAI and SIGAI policy work.

EPIC Panel on June 5th

A panel on AI, Human Rights, and US policy will be hosted by the Electronic Privacy Information Center (EPIC) at its annual meeting (and celebration of its 25th anniversary) on June 5, 2019, at the National Press Club in Washington, DC. Our Lorraine Kisselburgh will join Harry Lewis (Harvard), Sherry Turkle (MIT), Lynne Parker (UTenn and White House OSTP director for AI), Sarah Box (OECD), and Bilyana Petkova (EPIC and Maastricht) to discuss AI policy directions for the US. The event is free and open to the public. You can register at https://epic.org/events/June5AIpanel/

2019 ACM SIGAI Election Reminder

Please remember to vote and to review the information on http://www.acm.org/elections/sigs/voting-page. Please note that 16:00 UTC, 14 June 2019 is the deadline for submitting your vote. To access the secure voting site, you will enter your email address (the one associated with your ACM/SIG member record) to reach the menu of active SIG elections for which you are eligible. In the online menu, select your Special Interest Group and enter the 10-digit Unique Pin.

AI Research Roadmap

The Computing Community Consortium (CCC) is requesting comments on the draft of A 20-Year Community Roadmap for AI Research in the US. Please submit your comments here by May 28, 2019. See the AI Roadmap Website for more information. 

Here is a link to the whole report and links to individual sections:

     Title Page, Executive Summary, and Table of Contents 

  1. Introduction
  2. Major Societal Drivers for Future Artificial Intelligence Research 
  3. Overview of Core Technical Areas of AI Research Roadmap: Workshop Reports 
    1. Workshop I: A Research Roadmap for Integrated Intelligence 
    2. Workshop II: A Research Roadmap for Meaningful Interaction 
    3. Workshop III: A Research Roadmap for Self-Aware Learning 
  4. Major Findings 
  5. Recommendations
  6. Conclusions

Appendices (participants and contributors)

AI Researchers Win Turing Award

We are pleased to announce that the recipients of the 2018 ACM A.M. Turing Award are AI researchers Yoshua Bengio, Professor at the University of Montreal and Scientific Director at Mila; Geoffrey Hinton, Professor at the University of Toronto and Chief Scientific Advisor at the Vector Institute; and Yann LeCun, Professor at New York University and Chief AI Scientist at Facebook.

Their citation reads as follows:

“For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.”

Bengio, Hinton, and LeCun will be presented with the Turing Award at the June 15, 2019 ACM Awards Banquet in San Francisco.

Please see https://awards.acm.org/about/2018-turing for more information.