Work Transition

AI and other automation technologies have great promise for benefiting society and enhancing productivity, but appropriate policies by companies and governments are needed to help manage workforce transitions and make them as smooth as possible. The McKinsey Global Institute report "AI, automation, and the future of work: Ten things to solve for" states that “There is work for everyone today and there will be work for everyone tomorrow, even in a future with automation. Yet that work will be different, requiring new skills, and a far greater adaptability of the workforce than we have seen. Training and retraining both mid-career workers and new generations for the coming challenges will be an imperative. Government, private-sector leaders, and innovators all need to work together to better coordinate public and private initiatives, including creating the right incentives to invest more in human capital. The future with automation and AI will be challenging, but a much richer one if we harness the technologies with aplomb—and mitigate the negative effects.” They list likely actionable and scalable solutions in several key areas:

1. Ensuring robust economic and productivity growth

2. Fostering business dynamism

3. Evolving education systems and learning for a changed workplace

4. Investing in human capital

5. Improving labor-market dynamism

6. Redesigning work

7. Rethinking incomes

8. Rethinking transition support and safety nets for workers affected

9. Investing in drivers of demand for work

10. Embracing AI and automation safely

In redesigning work and rethinking incomes, we have the chance to pursue bold ideas that improve the lives of workers and give them more interesting jobs that could provide meaning, purpose, and dignity.

Some of the categories of new jobs that could replace old jobs are:
1. Making, designing, and coding in AI, data science, and engineering occupations
2. Working in new types of non-AI jobs that are enhanced by AI, making unpleasant old jobs more palatable or providing new jobs that are more interesting; the gig economy and crowdsourcing ideas are examples that could provide creative employment options
3. Providing living wages for people to do things they love; for example, in the arts through dramatic funding increases for NEA and NEH programs. Grants to individual artists and musicians, professional and amateur musical organizations, and informal arts education initiatives could enrich communities while providing income for millions of people. Policies to implement this idea could be one piece of the future-of-work puzzle and would be far preferable for the economy and society to allowing large-scale unemployment from the loss of work due to automation.

National AI Strategy

The National Artificial Intelligence Research and Development Strategic Plan – an update of the report by the Select Committee on Artificial Intelligence of the National Science & Technology Council – was released in June 2019, and the President’s Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, was released on February 11, 2019. The Computing Community Consortium (CCC) recently released the AI Roadmap Website, and an interesting industry response is “Intel Gets Specific on a National Strategy for AI: How to Propel the US into a Sustainable Leadership Position on the Global Artificial Intelligence Stage” by Naveen Rao and David Hoffman. Excerpts follow, and the accompanying links provide the details:

“AI is more than a matter of making good technology; it is also a matter of making good policy. And that’s what a robust national AI strategy will do: continue to unlock the potential of AI, prepare for AI’s many ramifications, and keep the U.S. among leading AI countries. At least 20 other countries have published, and often funded, their national AI strategies. Last month, the administration signaled its commitment to U.S. leadership in AI by issuing an executive order to launch the American AI Initiative, focusing federal government resources to develop AI. Now it’s time to take the next step and bring industry and government together to develop a fully realized U.S. national strategy to continue leading AI innovation.

“… But to sustain leadership and effectively manage the broad social implications of AI, the U.S. needs coordination across government, academia, industry and civil society. This challenge is too big for silos, and it requires that technologists and policymakers work together and understand each other’s worlds.” Their call to action was released in May 2018.

Four Key Pillars

“Our recommendation for a national AI strategy lays out four key responsibilities for government. Within each of these areas we propose actionable steps. We provide some highlights here, and we encourage you to read the full white paper or scan the shorter fact sheet.

* Sustainable and funded government AI research and development can help to advance the capabilities of AI in areas such as healthcare, cybersecurity, national security and education, but there need to be clear ethical guidelines.

* Create new employment opportunities and protect people’s welfare given that AI has the potential to automate certain work activities.

* Liberate and share data responsibly, as the more data that is available, the more “intelligent” an AI system can become. But we need guardrails.

* Remove barriers and create a legal and policy environment that supports AI so that the responsible development and use of AI is not inadvertently derailed.”

AI Race Matters

China, the European Union, and the United States have been in the news about strategic plans and policies on the future of AI. The July 2 AI Matters policy blog post was on the U.S. National Artificial Intelligence Research and Development Strategic Plan, released in June, as an update of the report by the Select Committee on Artificial Intelligence of The National Science & Technology Council. The Computing Community Consortium (CCC) recently released the AI Roadmap Website.
Now, a Center for Data Innovation report compares the current standings of China, the European Union, and the United States and makes policy recommendations. Here is the report summary: “Many nations are racing to achieve a global innovation advantage in artificial intelligence (AI) because they understand that AI is a foundational technology that can boost competitiveness, increase productivity, protect national security, and help solve societal challenges. This report compares China, the European Union, and the United States in terms of their relative standing in the AI economy by examining six categories of metrics—talent, research, development, adoption, data, and hardware. It finds that despite China’s bold AI initiative, the United States still leads in absolute terms. China comes in second, and the European Union lags further behind. This order could change in coming years as China appears to be making more rapid progress than either the United States or the European Union. Nonetheless, when controlling for the size of the labor force in the three regions, the current U.S. lead becomes even larger, while China drops to third place, behind the European Union. This report also offers a range of policy recommendations to help each nation or region improve its AI capabilities.”

About Face

Face recognition (FR) R&D has made great progress in recent years and has been prominent in the news. In public policy, many are calling for a reversal of the trajectory of FR systems and products. In the hands of people of good will – using products designed for safety and training systems with appropriate data – FR could give society and individuals a better life. The Verge reports on China’s use of the unique facial markings of pandas to identify individual animals. FR research includes work to mitigate negative outcomes, as with the Adobe and UC Berkeley work on Detecting Facial Manipulations in Adobe Photoshop: automatically detecting when images of faces have been manipulated by splicing, cloning, or removal of an object.
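
To make the manipulation-detection idea concrete, here is a minimal, hedged sketch of the general pattern used in this line of research: fine-tune a standard image classifier to label face crops as authentic or manipulated. This is an illustration only, not the Adobe/UC Berkeley system; the PyTorch/torchvision backbone, the two-class setup, and the input file name are our assumptions.

```python
# Illustrative sketch only (not the Adobe/UC Berkeley method).
# A generic binary classifier of the kind used in manipulation-detection
# research: an ImageNet backbone with a new two-class head.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from a pretrained backbone; the new head is untrained and would
# need fine-tuning on labeled authentic/manipulated face crops first.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = authentic, 1 = manipulated

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def predict(path: str) -> str:
    """Classify a single face crop as authentic or manipulated."""
    model.eval()
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    return "manipulated" if probs[1] > probs[0] else "authentic"

print(predict("face_crop.jpg"))  # hypothetical input file
```

A real system would be trained on large sets of edited and original images and would typically localize the manipulated regions as well, not just flag whole images.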

Intentional and unintentional application of systems that are not designed and trained for ethical use is a threat to society. Screening for terrorists could be beneficial, but FR-based lie and fraud detection systems may not work reliably. The safety of FR is currently an important issue for policymakers, but regulations could have negative consequences for AI researchers. As with many contemporary issues, conflicts arise because of conflicting policies in different countries.

Recent and current legislation is attempting to restrict the use of FR and possibly FR research.
* San Francisco, CA; Somerville, MA; and Oakland, CA, are the first three cities to limit the use of FR to identify people.
* “Facial recognition may be banned from public housing thanks to proposed law” – CNET reports that a bill will be introduced to address the issue that “… landlords across the country continue to install smart home technology and tenants worry about unchecked surveillance, there’s been growing concern about facial recognition arriving at people’s doorsteps.”
* The major social media companies are being pressed on “how they plan to handle the threat of deepfake images and videos on their platforms ahead of the 2020 elections.”
* A call for a more comprehensive ban on FR has been launched by the digital rights group Fight for the Future, seeking a complete Federal ban on government use of facial recognition surveillance.

Beyond legislation against FR research and banning certain products, work is in progress to enable safe and ethical use of FR. A more general example that could be applied to FR is the MITRE work The Ethical Framework for the Use of Consumer-Generated Data in Health Care, which “establishes ethical values, principles, and guidelines to guide the use of Consumer-Generated Data for health care purposes.”

US and G20 AI Policy

The past few weeks have been busy with government events and announcements on AI Policy.

The G20 on AI

Ministers from the Group of 20 major economies conducted meetings on trade and the digital economy. They produced guiding principles for using artificial intelligence, based on principles adopted last month by the 36-member OECD and an additional six countries. The G20 guidelines call for users and developers of AI to be fair and accountable, to use transparent decision-making processes, and to respect the rule of law and values including privacy, equality, diversity, and internationally recognized labor rights. The principles also urge governments to ensure a fair transition for workers through training programs and access to new job opportunities.

Bipartisan Group of Legislators Act on “Deepfake” Videos

A bipartisan group of senators introduced legislation Friday intended to lessen the threat posed by “deepfake” videos, those created with AI technologies to manipulate original videos and produce misleading information. The legislation would require the Department of Homeland Security to conduct an annual study of deepfakes and related content and to assess the AI technologies used to create them. This could lead to changes to existing regulations or to new regulations affecting the use of AI.

Hearing on Societal and Ethical Implications of AI

The House Science, Space and Technology Committee held a hearing on June 26th on the societal and ethical implications of artificial intelligence, now available on video.

The National Artificial Intelligence Research and Development Strategic Plan, released in June, is an update of the report by the Select Committee on Artificial Intelligence of The National Science & Technology Council.

On February 11, 2019, the President signed Executive Order 13859 Maintaining American Leadership in Artificial Intelligence. According to Michael Kratsios, Deputy Assistant to the President for Technology Policy, this order “launched the American AI Initiative, which is a concerted effort to promote and protect AI technology and innovation in the United States. The Initiative implements a whole-of-government strategy in collaboration and engagement with the private sector, academia, the public, and likeminded international partners. Among other actions, key directives in the Initiative call for Federal agencies to prioritize AI research and development (R&D) investments, enhance access to high-quality cyberinfrastructure and data, ensure that the Nation leads in the development of technical standards for AI, and provide education and training opportunities to prepare the American workforce for the new era of AI.

“The first seven strategies continue from the 2016 Plan, reflecting the reaffirmation of the importance of these strategies by multiple respondents from the public and government, with no calls to remove any of the strategies. The eighth strategy is new and focuses on the increasing importance of effective partnerships between the Federal Government and academia, industry, other non-Federal entities, and international allies to generate technological breakthroughs in AI and to rapidly transition those breakthroughs into capabilities.”

Strategy 8: Expand Public–Private Partnerships to Accelerate Advances in AI is new in the June 2019 plan. It “reflects the growing importance of public-private partnerships enabling AI R&D” and directs agencies to “promote opportunities for sustained investment in AI R&D and for transitioning advances into practical capabilities, in collaboration with academia, industry, international partners, and other non-Federal entities.”

Points that continue from the seven strategies of the 2016 Plan, reaffirmed after the February Executive Order, include

1. support for the development of instructional materials and teacher professional development in computer science at all levels, with emphasis at the K–12 levels

2. consideration of AI as a priority area within existing Federal fellowship and service programs

3. development of AI techniques for human augmentation

4. emphasis on achieving trust: AI system designers need to create accurate, reliable systems with informative, user-friendly interfaces.

The National Science and Technology Council (NSTC) is functioning again. NSTC is the principal means by which the Executive Branch coordinates science and technology policy across the diverse entities that make up the Federal research and development enterprise. A primary objective of the NSTC is to ensure that science and technology policy decisions and programs are consistent with the President’s stated goals. The NSTC prepares research and development strategies that are coordinated across Federal agencies aimed at accomplishing multiple national goals. The work of the NSTC is organized under committees that oversee subcommittees and working groups focused on different aspects of science and technology. More information is available at https://www.whitehouse.gov/ostp/nstc.

The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office of the President with advice on the scientific, engineering, and technological aspects of the economy, national security, homeland security, health, foreign relations, the environment, and the technological recovery and use of resources, among other topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of Management and Budget with an annual review and analysis of Federal research and development (R&D) in budgets, and serves as a source of scientific and technological analysis and judgment for the President with respect to major policies, plans, and programs of the Federal Government. More information is available at https://www.whitehouse.gov/ostp.

Groups that advise and assist the NSTC on AI include  

* The Select Committee on Artificial Intelligence (AI) addresses Federal AI R&D activities, including those related to autonomous systems, biometric identification, computer vision, human computer interactions, machine learning, natural language processing, and robotics. The committee supports policy on technical, national AI workforce issues.

* The Subcommittee on Machine Learning and Artificial Intelligence monitors the state of the art in machine learning (ML) and artificial intelligence within the Federal Government, in the private sector, and internationally.

* The Artificial Intelligence Research & Development Interagency Working Group coordinates Federal R&D in AI and supports and coordinates activities tasked by the Select Committee on AI and the NSTC Subcommittee on Machine Learning and Artificial Intelligence.

More information is available at https://www.nitrd.gov/groups/AI.

AI Regulation

With AI in the news so much over the past year, the public awareness of potential problems arising from the proliferation of AI systems and products has led to increasing calls for regulation. The popular media, and even technical media, contain misinformation and misplaced fears, but plenty of legitimate issues exist even if their relative importance is sometimes misunderstood. Policymakers, researchers, and developers need to be in dialog about the true needs and potential dangers of regulation.

“Google top lawyer pushes back against one-size-fits-all rules for AI” by Janosch Delcker at POLITICO is an example of corporate reaction to the calls for regulation. “Understanding exactly the applications that we see for AI, and how those should be regulated, that’s an important next chapter,” Kent Walker, Google’s senior vice president for global affairs and the company’s chief legal officer, told POLITICO during a recent visit to Germany. “But you generally don’t want one-size-fits-all regulation, especially for a tool that is going to be used in a lot of different ways,” he added.

From our policy perspective, the significant risks from AI systems include misuse and faulty, unsafe designs that can create bias, non-transparency of use, and loss of privacy. AI systems are known to discriminate against minorities, both unintentionally and intentionally. An important discussion we should be having is whether governments, international organizations, and big corporations, which have already released dozens of non-binding guidelines for the responsible development and use of AI, are the best entities for writing and enforcing regulations. Non-binding principles will not make some companies developing and applying AI products accountable. An important point in this regard is to hold companies responsible for the product design process itself, not just for testing products after they are in use.

Introduction of new government regulations is a long process and subject to pressure from lobbyists, and the current US administration is generally inclined against regulation anyway. We should discuss alternatives such as clearinghouses and consumer groups that endorse AI products designed for safety and ethical use. If well publicized, the endorsements of respected non-partisan groups, including professional societies, might be more effective and timely than government regulations. The European Union has released its Ethics Guidelines for Trustworthy AI, and a second document with recommendations on how to boost investment in Europe’s AI industry is to be published. In May 2019, the Organization for Economic Cooperation and Development (OECD) issued its first set of international OECD Principles on Artificial Intelligence, which have been embraced by the United States and leading AI companies.

Events and Announcements

AAAI Policy Initiative

AAAI has established a new mailing list on US Policy that will focus exclusively on the discussion of US policy matters related to artificial intelligence. All members and affiliates are invited to join the list at https://aaai.org/Organization/mailing-lists.php

Participants will have the opportunity to subscribe or unsubscribe at any time. The mailing list will be moderated, and all posts will be approved before dissemination. This is a great opportunity for another productive partnership between AAAI and SIGAI policy work.

EPIC Panel on June 5th

A panel on AI, Human Rights, and US policy will be hosted by the Electronic Privacy Information Center (EPIC) at its annual meeting (and celebration of its 25th anniversary) on June 5, 2019, at the National Press Club in DC. Our Lorraine Kisselburgh will join Harry Lewis (Harvard), Sherry Turkle (MIT), Lynne Parker (UTenn and White House OSTP director for AI), Sarah Box (OECD), and Bilyana Petkova (EPIC and Maastricht) to discuss AI policy directions for the US. The event is free and open to the public. You can register at https://epic.org/events/June5AIpanel/

2019 ACM SIGAI Election Reminder

Please remember to vote and to review the information on http://www.acm.org/elections/sigs/voting-page. Please note that 16:00 UTC, 14 June 2019 is the deadline for submitting your vote. To access the secure voting site, you will enter your email address (the one associated with your ACM/SIG member record) to reach the menu of active SIG elections for which you are eligible. In the online menu, select your Special Interest Group and enter the 10-digit Unique Pin.

AI Research Roadmap

The Computing Community Consortium (CCC) is requesting comments on the draft of A 20-Year Community Roadmap for AI Research in the US. Please submit your comments here by May 28, 2019. See the AI Roadmap Website for more information. 

Here is a link to the whole report and links to individual sections:

     Title Page, Executive Summary, and Table of Contents 

  1. Introduction
  2. Major Societal Drivers for Future Artificial Intelligence Research 
  3. Overview of Core Technical Areas of AI Research Roadmap: Workshop Reports 
    1. Workshop I: A Research Roadmap for Integrated Intelligence 
    2. Workshop II: A Research Roadmap for Meaningful Interaction 
    3. Workshop III: A Research Roadmap for Self-Aware Learning 
  4. Major Findings 
  5. Recommendations
  6. Conclusions

Appendices (participants and contributors)

New Jobs in the Future of Work

As employers increasingly adopt automation technology, many workforce analysts look to jobs and career paths in new disciplines, especially data science and applications of AI, to absorb workers displaced by automation. By some accounts, data science is in first place for technology career opportunities. Estimating current and near-term numbers of data scientists and AI professionals is difficult because of the different job titles and position descriptions used by organizations and job recruiters. Likewise, many employees in positions with traditional titles have transitioned to data science and AI work. Better estimates, or at least upper limits, are necessary for evidence-based predictions of unemployment rates due to automation over the next decade. McKinsey & Company estimates that 375 million jobs will be lost globally due to AI and other automation technologies by 2030, and one school of thought in today’s public discourse is that at least that number of new jobs will be created. An issue for the AI community and policymakers is the nature, quality, and number of the new jobs – and how many data science and AI technology jobs will contribute to meeting the shortfall.

An article in KDnuggets by Gregory Piatetsky points out that a “Search for data scientist (without quotes) finds about 30,000 jobs, but we are not sure how many of those jobs are for scientists in other areas … a person employed to analyze and interpret complex digital data, such as the usage statistics of a website, especially in order to assist a business in its decision-making … titles include Data Scientist, Data Analyst, Statistician, Bioinformatician, Neuroscientist, Marketing executive, Computer scientist, etc…” Data on this issue could clarify the net number of future jobs in AI, data science, and related areas. Computer science had a similar history, with the boom in the new field followed by the migration of computing into many other disciplines. Another factor is that “long-term, however, automation will be replacing many jobs in the industry, and Data Scientist job will not be an exception. Already today companies like DataRobot and H2O offer automated solutions to Data Science problems. Respondents to KDnuggets 2015 Poll expected that most expert-level Predictive Analytics/Data Science tasks will be automated by 2025. To stay employed, Data Scientists should focus on developing skills that are harder to automate, like business understanding, explanation, and storytelling.” This issue is also important in estimating the number of new jobs by 2030 for displaced workers.

Kiran Garimella in his Forbes article “Job Loss From AI? There’s More To Fear!” examines the scenario of not enough new jobs being created to replace the ones lost to automation. His interesting perspective turns to economists, sociologists, and insightful policymakers “to re-examine and re-formulate their models of human interaction and organization and … re-think incentives and agency relationships.”

OpenAI

A recent controversy erupted over OpenAI’s new version of its language model, which generates well-written continuations of text based on unsupervised analysis of large samples of writing. The announcement, and the decision not to follow open-source practices, raises interesting policy issues about regulation and self-regulation of AI products. OpenAI, a non-profit AI research company founded by Elon Musk and others, announced on February 14, 2019, that “We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.”

The reactions to the announcement stemmed from the decision described in this statement from the release: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
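
For readers who want to try the staged-release model themselves, here is a minimal sketch, not part of the original announcement, that loads the publicly released small GPT-2 checkpoint through the Hugging Face transformers library (the library, checkpoint name, and sampling settings are our assumptions):

```python
# Minimal sketch: sample a text continuation from the small released
# GPT-2 model via the Hugging Face "transformers" library (assumed here;
# OpenAI's own release shipped as a separate TensorFlow codebase).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # ~124M-parameter model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "AI policy in the United States"
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts the next token; sampling with top-k
# keeps the continuation varied rather than deterministic.
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # avoid a padding warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Each run produces a different continuation; the “coherent paragraphs” claim in the announcement refers to the full-size model, so output from the small checkpoint is noticeably weaker.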

Examples of the many reactions appeared on TechCrunch.com and in Wired. The Electronic Frontier Foundation has an analysis of the manner of the release (letting journalists know first) and concludes, “when an otherwise respected research entity like OpenAI makes a unilateral decision to go against the trend of full release, it endangers the open publication norms that currently prevail in language understanding research.”

This issue is an example of previous ideas in our Public Policy blog about who, if anyone, should regulate AI developments and products that have potential negative impacts on society. Do we rely on self-regulation or require governmental regulations? What if the U.S. has regulations and other countries do not? Would a clearinghouse approach put profit-based pressure on developers and corporations? Can the open source movement be successful without regulatory assistance?

AI Hype Not

A recent item in Science|Business, “Artificial intelligence nowhere near the real thing, says German AI chief” by Éanna Kelly, gives policy-worthy warnings and ideas. “In his 20 years as head of Germany’s biggest AI research lab Wolfgang Wahlster has seen the tech hype machine splutter three times. As he hands over to a new CEO, he warns colleagues: ‘Don’t over-promise.’” The computer scientist, who has just ended a 20-year stint as CEO of the German Research Centre for Artificial Intelligence, says that [over-promising] greatly underestimates the distance between AI and its human counterpart: “‘We’re years away from a game changer in the field. I always warn people, one should be a bit careful with what they claim. Every day you work on AI, you see the big gap between human intelligence and AI,’ Wahlster told Science|Business.”

For AI policy, we should remember to look out for over-promising, but we also need to be mindful of the time frame for making effective policy and be fully engaged now. Our efforts importantly inform policymakers about the real opportunities to make AI successful. A recent article in The Conversation by Ben Shneiderman, “What alchemy and astrology can teach artificial intelligence researchers,” gives insightful information and advice on how to avoid being distracted away “… from where the real progress is already happening: in systems that enhance – rather than replace – human capabilities.” Shneiderman recommends that technology designers shift “from trying to replace or simulate human behavior in machines to building wildly successful applications that people love to use.”