Events and Announcements

AAAI Policy Initiative

AAAI has established a new mailing list on US Policy that will focus exclusively on the discussion of US policy matters related to artificial intelligence. All members and affiliates are invited to join the list at https://aaai.org/Organization/mailing-lists.php

Participants will have the opportunity to subscribe or unsubscribe at any time. The mailing list will be moderated, and all posts will be approved before dissemination. This is a great opportunity for another productive partnership between AAAI and SIGAI policy work.

EPIC Panel on June 5th

A panel on AI, Human Rights, and US policy will be hosted by the Electronic Privacy Information Center (EPIC) at its annual meeting (and celebration of its 25th anniversary) on June 5, 2019, at the National Press Club in Washington, DC. Our Lorraine Kisselburgh will join Harry Lewis (Harvard), Sherry Turkle (MIT), Lynne Parker (UTenn and White House OSTP director for AI), Sarah Box (OECD), and Bilyana Petkova (EPIC and Maastricht) to discuss AI policy directions for the US. The event is free and open to the public. You can register at https://epic.org/events/June5AIpanel/

2019 ACM SIGAI Election Reminder

Please remember to vote and to review the information at http://www.acm.org/elections/sigs/voting-page. Please note that 16:00 UTC on 14 June 2019 is the deadline for submitting your vote. To access the secure voting site, enter your email address (the one associated with your ACM/SIG member record) to reach the menu of active SIG elections for which you are eligible. In the online menu, select your Special Interest Group and enter the 10-digit unique PIN.

AI Research Roadmap

The Computing Community Consortium (CCC) is requesting comments on the draft of A 20-Year Community Roadmap for AI Research in the US. Please submit your comments here by May 28, 2019. See the AI Roadmap Website for more information. 

Here is a link to the whole report and links to individual sections:

  Title Page, Executive Summary, and Table of Contents

  1. Introduction
  2. Major Societal Drivers for Future Artificial Intelligence Research 
  3. Overview of Core Technical Areas of AI Research Roadmap: Workshop Reports 
    1. Workshop I: A Research Roadmap for Integrated Intelligence 
    2. Workshop II: A Research Roadmap for Meaningful Interaction 
    3. Workshop III: A Research Roadmap for Self-Aware Learning 
  4. Major Findings 
  5. Recommendations
  6. Conclusions

Appendices (participants and contributors)

AI Researchers Win Turing Award

We are pleased to announce that the recipients of the 2018 ACM A.M. Turing Award are AI researchers Yoshua Bengio, Professor at the University of Montreal and Scientific Director at Mila; Geoffrey Hinton, Professor at the University of Toronto and Chief Scientific Advisor at the Vector Institute; and Yann LeCun, Professor at New York University and Chief AI Scientist at Facebook.

Their citation reads as follows:

For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.

Bengio, Hinton, and LeCun will be presented with the Turing Award at the June 15, 2019 ACM Awards Banquet in San Francisco.

Please see https://awards.acm.org/about/2018-turing for more information.

New Jobs in the Future of Work

As employers increasingly adopt automation technology, many workforce analysts look to jobs and career paths in new disciplines, especially data science and applications of AI, to absorb workers who are displaced by automation. By some accounts, data science ranks first among technology career opportunities. Estimating current and near-term numbers of data scientists and AI professionals is difficult because organizations and job recruiters use differing job titles and position descriptions. Likewise, many employees in positions with traditional titles have transitioned to data science and AI work. Better estimates, or at least upper limits, are necessary for evidence-based predictions of unemployment rates due to automation over the next decade. McKinsey & Company estimates that 375 million jobs will be lost globally to AI and other automation technologies by 2030, and one school of thought in today’s public discourse is that at least that many new jobs will be created. An issue for the AI community and policymakers is the nature, quality, and number of the new jobs, and how many data science and AI technology jobs will contribute to meeting the shortfall.

An article in KDnuggets by Gregory Piatetsky points out that a “Search for data scientist (without quotes) finds about 30,000 jobs, but we are not sure how many of those jobs are for scientists in other areas … a person employed to analyze and interpret complex digital data, such as the usage statistics of a website, especially in order to assist a business in its decision-making … titles include Data Scientist, Data Analyst, Statistician, Bioinformatician, Neuroscientist, Marketing executive, Computer scientist, etc…” Data on this issue could clarify the net number of future jobs in AI, data science, and related areas. Computer science had a similar history: a boom in the new field was followed by the migration of computing into many other disciplines. Another factor is that “long-term, however, automation will be replacing many jobs in the industry, and Data Scientist job will not be an exception. Already today companies like DataRobot and H2O offer automated solutions to Data Science problems. Respondents to KDnuggets 2015 Poll expected that most expert-level Predictive Analytics/Data Science tasks will be automated by 2025. To stay employed, Data Scientists should focus on developing skills that are harder to automate, like business understanding, explanation, and story telling.” This issue is also important in estimating the number of new jobs by 2030 for displaced workers.

Kiran Garimella, in his Forbes article “Job Loss From AI? There’s More To Fear!”, examines the scenario in which not enough new jobs are created to replace those lost through automation. His interesting perspective turns to economists, sociologists, and insightful policymakers “to re-examine and re-formulate their models of human interaction and organization and … re-think incentives and agency relationships.”

OpenAI

A recent controversy erupted over OpenAI’s new version of their language model, which generates well-written continuations of text based on unsupervised analysis of large samples of writing. Their announcement, and their decision not to follow open-source practices, raises interesting policy issues about regulation and self-regulation of AI products. OpenAI, a non-profit AI research company founded by Elon Musk and others, announced on February 14, 2019, that “We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.”

The reactions to the announcement centered on the following statement in the release: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
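For readers who want a feel for what such a model produces, here is a minimal sketch that samples from the publicly released smaller GPT-2 model using the Hugging Face transformers package. The tooling choice and the prompt are ours, and the snippet is illustrative rather than a reproduction of OpenAI’s setup:

```python
# Illustrative sketch: sample text from the publicly released small GPT-2 model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # the released small model

prompt = "AI policy in the United States should"
samples = generator(prompt, max_length=50, num_return_sequences=2, do_sample=True)
for s in samples:
    print(s["generated_text"])
```

Running a prompt like this a few times makes the policy question concrete: even the smaller model’s samples are fluent enough to pass casual inspection, which is exactly the capability that prompted OpenAI’s caution.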

Examples of the many reactions can be found at TechCrunch and Wired. The Electronic Frontier Foundation has an analysis of the manner of the release (letting journalists know first) and concludes, “when an otherwise respected research entity like OpenAI makes a unilateral decision to go against the trend of full release, it endangers the open publication norms that currently prevail in language understanding research.”

This issue revisits questions raised previously in our Public Policy blog about who, if anyone, should regulate AI developments and products that have potential negative impacts on society. Do we rely on self-regulation or require governmental regulations? What if the U.S. has regulations and other countries do not? Would a clearinghouse approach put profit-based pressure on developers and corporations? Can the open-source movement be successful without regulatory assistance?

Call for Nominations

Editor-In-Chief ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)

The term of the current Editor-in-Chief (EiC) of ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) is coming to an end, and the ACM Publications Board has set up a nominating committee to assist the Board in selecting the next EiC. TALLIP was established in 2002 and has been experiencing steady growth, with 178 submissions received in 2017.

Nominations, including self-nominations, are invited for a three-year term as TALLIP EiC, beginning on June 1, 2019. The EiC appointment may be renewed at most once. This is an entirely voluntary position, but ACM will provide appropriate administrative support.

Appointed by the ACM Publications Board, Editors-in-Chief (EiCs) of ACM journals are delegated full responsibility for the editorial management of the journal, consistent with the journal’s charter and general ACM policies. The Board relies on EiCs to ensure that the content of the journal is of high quality and that the editorial review process is both timely and fair. The EiC has final say on the acceptance of papers, the size of the Editorial Board, and the appointment of Associate Editors. A complete list of responsibilities can be found in the ACM Volunteer Editors Position Descriptions.

Nominations should include a vita along with a brief statement of why the nominee should be considered. Self-nominations are encouraged, and should include a statement of the candidate’s vision for the future development of TALLIP. The deadline for submitting nominations is April 15, 2019, although nominations will continue to be accepted until the position is filled.

Please send all nominations to the nominating committee chair, Monojit Choudhury (monojitc@microsoft.com).

The search committee members are:

  • Monojit Choudhury (Microsoft Research, India), Chair
  • Kareem M. Darwish (Qatar Computing Research Institute, HBKU)
  • Tei-wei Kuo (National Taiwan University & Academia Sinica) EiC of ACM Transactions on Cyber-Physical Systems; Vice Chair, ACM SIGAPP
  • Helen Meng (Chinese University of Hong Kong)
  • Taro Watanabe (Google Inc., Tokyo)
  • Holly Rushmeier (Yale University), ACM Publications Board Liaison

AI Hype Not

A recent item in Science|Business, “Artificial intelligence nowhere near the real thing, says German AI chief” by Éanna Kelly, gives policy-worthy warnings and ideas. “In his 20 years as head of Germany’s biggest AI research lab, Wolfgang Wahlster has seen the tech hype machine splutter three times. As he hands over to a new CEO, he warns colleagues: ‘Don’t over-promise.’” The computer scientist, who has just ended a 20-year stint as CEO of the German Research Centre for Artificial Intelligence, says that such hype greatly underestimates the distance between AI and its human counterpart: “We’re years away from a game changer in the field. I always warn people, one should be a bit careful with what they claim. Every day you work on AI, you see the big gap between human intelligence and AI,” Wahlster told Science|Business.

For AI policy, we should remember to watch out for overpromising, but we also need to be mindful of the time frame for making effective policy and be fully engaged now. Our effort, importantly, informs policymakers about the real opportunities to make AI successful. A recent article in The Conversation by Ben Shneiderman, “What alchemy and astrology can teach artificial intelligence researchers,” gives insightful advice on how to avoid being distracted “… from where the real progress is already happening: in systems that enhance – rather than replace – human capabilities.” Shneiderman recommends that technology designers shift “from trying to replace or simulate human behavior in machines to building wildly successful applications that people love to use.”

The American AI Initiative and EAAI

President Trump issued an Executive Order on February 11, 2019, entitled “Maintaining American Leadership in Artificial Intelligence”. The full text is at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/. The American AI Initiative, of course, needs analysis and implementation details. Two sections of the Executive Order give hope for opportunities to provide public input:

Sec (5)(a)(1)(i): Within 90 days of the date of this order, the OMB Director shall publish a notice in the Federal Register inviting the public to identify additional requests for access or quality improvements for Federal data and models that would improve AI R&D and testing. …[T]hese actions by OMB will help to identify datasets that will facilitate non-Federal AI R&D and testing.
and
Sec (6)(b): To help ensure public trust in the development and implementation of AI applications, OMB shall issue a draft version of the memorandum for public comment before it is finalized.
Please stay tuned for ways that our ACM US Technology Policy Committee (USTPC) can help us provide our feedback on the implementation of the Executive Order.

A summary and analysis report is available from the Center for Data Innovation: Executive Order Will Help Ensure U.S. Leadership in AI. They comment that the administration “needs to do more than reprogram existing funds for AI research, skill development, and infrastructure development” and “should ask Congress for significant funding increases to (a) expand these research efforts; (b) implement light-touch regulation for AI; (c) resist calls to implement roadblocks or speed bumps for this technology, including export restrictions; (d) rapidly expand adoption of AI within government; and (e) implement comprehensive reforms to the nation’s workforce training and adjustment policies.”

The latter point was a topic in my invited talk at EAAI-19. Opportunities and innovation in education and training for the workforce of the future rely crucially on public policymaking about workers in the era of increasing use of AI and other automation technologies. An important issue is who will provide training that is timely (by 2030), practical, and affordable for workers who are impacted by job disruptions and transitioning to the new, predicted post-automation jobs. The stakeholders, along with workers, are schools, employers, unions, community groups, and others. Even if more jobs are created than lost, the new work of the AI future will not necessarily be within reach of the people in the current and near-future workforce who are displaced.

Section 1 of the Executive Order “Maintaining American Leadership In Artificial Intelligence” follows:
Section 1.  Policy and Principles.  Artificial Intelligence (AI) promises to drive growth of the United States economy, enhance our economic and national security, and improve our quality of life. The United States is the world leader in AI research and development (R&D) and deployment.  Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.  The Federal Government plays an important role in facilitating AI R&D, promoting the trust of the American people in the development and deployment of AI-related technologies, training a workforce capable of using AI in their occupations, and protecting the American AI technology base from attempted acquisition by strategic competitors and adversarial nations.  Maintaining American leadership in AI requires a concerted effort to promote advancements in technology and innovation, while protecting American technology, economic and national security, civil liberties, privacy, and American values and enhancing international and industry collaboration with foreign partners and allies.  It is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy, the American AI Initiative (Initiative), guided by five principles:
(a)  The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.
(b)  The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today’s industries.
(c)  The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today’s economy and jobs of the future.
(d)  The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.
(e)  The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.

Interview with Iolanda Leite

Introduction

This column is the seventh in our series profiling senior AI researchers. This month we are happy to interview Iolanda Leite, Assistant Professor at the School of Computer Science and Electrical Engineering at the KTH Royal Institute of Technology in Sweden. This is a great opportunity to get to know Iolanda, the new AI Matters co-editor-in-chief. Welcome on board!

Biography

Iolanda Leite is an Assistant Professor at the School of Computer Science and Electrical Engineering at the KTH Royal Institute of Technology in Sweden. She holds a PhD in Information Systems and Computer Engineering from IST, University of Lisbon. Prior to joining KTH, she was a Research Assistant at the Intelligent Agents and Synthetic Characters Group at INESC-ID Lisbon, a Postdoctoral Associate at the Yale Social Robotics Lab, and an Associate Research Scientist at Disney Research Pittsburgh. Iolanda’s research interests are in the areas of Human-Robot Interaction and Artificial Intelligence. She aims to develop autonomous socially intelligent robots that can assist people over long periods of time.

Getting to Know Iolanda Leite

When and how did you become interested in CS and AI?

I became interested in CS at the age of 4, when the first computer arrived at our home. It is more difficult to establish a time to define my interest in AI. I was born in the 80s and have always been fascinated by toys that had some level of “intelligence” or “life-likeness,” like the Tamagotchi or the Furby robots. During my Master’s degree, I chose the Intelligent Systems specialization. That was probably when I seriously considered a research career in this area.

What professional achievement are you most proud of?

Seeing my students accomplish great things on their own.

What would you have chosen as your career if you hadn’t gone into CS?

I always loved to work with children, so maybe something related to child education.

What do you wish you had known as a Ph.D. student or early researcher?

As an early researcher, I often had a hard time dealing with the rejection of papers, applications, etc. What I wish the “past me” could know is that if one keeps working hard, things will eventually work out well in the end. In other words, keeping faith in the system.

What is the most interesting project you are currently involved with?

All of them! If I have to highlight one, we are working with elementary schools that have classes of newly arrived children, in a project where we are using social robots to promote inclusion between newly arrived and local children. This is part of an early career fellowship awarded by the Jacobs Foundation.

We currently observe many promising and exciting advances in using AI in education, going beyond automating Piazza answering. How should we make use of AI to teach AI?

I believe that AI can be used to complement teachers and provide personalized instruction to students of all ages and in a variety of topics. Robotic tutors can play an important role in education because the mere physical presence of a robot has been shown to have a positive impact on how much information students can recall, for example when compared to a virtual agent displayed on a computer screen delivering the exact same content.

How can we make AI more diverse? Do you have a concrete idea of what we as (PhD) students, researchers, and educators in AI can do to increase diversity in our field?

Something we can all do is participate in outreach initiatives targeting groups underrepresented in AI, to show them that there is space for them in the community. If we start bottom-up, in the long term I am positive that our community will be more diverse at all levels, and the bias in opportunities, recruiting, etc. will go away.

What was your most difficult professional decision and why?

Leaving my home country (Portugal) after finishing my PhD to continue my research career, because I miss my family and friends, and also the good weather!

How do you balance being involved in so many different aspects of the AI community?

I love what I do and I currently don’t have any hobbies 🙂

AI is grown up – it’s time to make use of it for good. Which real-world problem would you like to see solved by AI in the future?

If AI could fully address any of the Sustainable Development Goals established by the United Nations, it would be (more than) great. Although there are excellent research and funding initiatives in that direction, we are not there yet.

What is your favorite AI-related movie or book and why?

One of my favorites recently was the Westworld TV series, because of the power relationships between the human and the robotic characters. I find it hard to believe that humans will treat robots the way they are treated in the series, but it makes me reflect on what our future interactions with technology that is becoming more personalized and “human-like” might look like.

Autonomous Vehicles: Policy and Technology

In 2018, we discussed language that aims at safety and degrees of autonomy rather than at possibly unattainable goals of completely autonomous things. A better approach, at least for the next 5-10 years, is to seek the correct balance between technology and humans in hybrid devices and systems. See, for example, the Unmanned Integrated Systems Roadmap, 2017-2042 and Ethically Aligned Design. We also need to consider the limits and possibilities of research on these technologies, and their implications for the time frame and proper focus of policymaking.
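As a concrete vocabulary for these degrees of autonomy, the SAE J3016 standard defines six levels of driving automation. The sketch below encodes them with one-line summaries paraphrased by us; it is a reference aid, not part of any cited roadmap:

```python
from enum import IntEnum

class SAEDrivingAutomation(IntEnum):
    """SAE J3016 levels of driving automation (summaries paraphrased)."""
    NO_AUTOMATION = 0           # human driver performs the entire driving task
    DRIVER_ASSISTANCE = 1       # system assists with steering or speed, not both
    PARTIAL_AUTOMATION = 2      # system handles steering and speed; driver monitors
    CONDITIONAL_AUTOMATION = 3  # system drives; driver must take over on request
    HIGH_AUTOMATION = 4         # no driver takeover needed within a defined domain
    FULL_AUTOMATION = 5         # system drives everywhere a human could

# The policy discussion above concerns the hybrid territory around levels 2-3,
# as distinct from the possibly unattainable level 5.
```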

In a recent interview, Dr. Harold Szu, a co-founder and former governor of the International Neural Network Society, discusses research ideas that better mimic human thinking and that could dramatically reduce the time needed to develop autonomous technology. He describes a possible new level of brain-style computing that incorporates fuzzy membership functions into autonomous control systems. Autonomous technology incorporating human characteristics, along with safe policies and the earlier arrival of brain-style technologies, could usher in the next big economic boom. For more details, view the Harold Szu interview.
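To make the idea of a fuzzy membership function concrete, here is a minimal sketch of the triangular membership functions commonly used in fuzzy control; the function shape and the following-distance example are our own hypothetical illustration, not a detail from Szu’s work:

```python
# Hypothetical illustration: a triangular fuzzy membership function, a basic
# building block of fuzzy control systems.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Degree (0..1) to which x belongs to a fuzzy set that ramps up
    from a, peaks at b, and falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Example: how strongly a 22 m gap counts as a "comfortable following
# distance" for a vehicle controller (all breakpoints are made-up values).
print(triangular(22.0, a=10.0, b=25.0, c=40.0))  # 0.8
```

Rather than a hard rule ("unsafe below 15 m"), the controller blends overlapping sets like "too close" and "comfortable," which is what lets fuzzy systems approximate graded human judgments.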

Discussion Issues for 2019

Facebook, Face Recognition, Autonomous Things, and the Future of Work

Four focus areas of discussion at the end of 2018 are the initial topics for the SIGAI Policy Blog as we start 2019. The following, with links to resources, are important ongoing subjects for our Policy blogsite in the new year:

Facebook continues to draw attention to the general issue of data privacy and the role of personal data in business models. Here are some good resources to check:

  • NY Times on Facebook Privacy
  • Facebook Partners
  • Spotify
  • Netflix

Facial recognition software is known to be flawed, with side effects of bias, unwanted surveillance, and other problems. The Safe Face Pledge, developed by the Algorithmic Justice League and the Georgetown University Law Center on Privacy & Technology, is an example of emerging efforts to make organizations aware of problems with facial recognition products, for example in autonomous weapons systems and law enforcement agencies. The Safe Face Pledge asks companies to commit to safety in business practices and to promote public policy for broad regulation and government oversight of facial recognition applications.

“Autonomous” Things: Degrees of Separation: The R&D for “autonomous” vehicles and other devices that dominate our daily lives poses challenges for technology as well as for ethics and policy. In 2018, we discussed language that aims at safety and degrees of autonomy rather than at possibly unattainable goals of completely autonomous things. A better approach may be to seek the correct balance between technology and humans in hybrid devices and systems. See, for example, the Unmanned Integrated Systems Roadmap, 2017-2042 and Ethically Aligned Design.

The Future of Work and Education is a topic that involves not only predicting the workforce of the future, but also how society needs to prepare for it. Many experts believe that our current school systems are not up to the challenge and that industry and government programs are needed to meet the challenges emerging in just a few years. See, for example, writing by the Ford Foundation and the World Economic Forum.

We welcome your feedback and discussions as we enter the 2019 world of AI and policy!