ACM Special Interest Group on Artificial Intelligence

We promote and support the growth and application of AI principles and techniques throughout computing

AI Matters: our blog

Policy in the News

The Computing Community Consortium (CCC) announced a new initiative to create a Roadmap for Artificial Intelligence. SIGAI’s Yolanda Gil (University of Southern California and President-Elect of AAAI) will work with Bart Selman (Cornell University) to lead the effort. The initiative will support the U.S. Administration’s efforts in this area and involve academic and industrial researchers to help map a course for needed research in AI. They will hold a series of workshops in 2018 and 2019 to produce the Roadmap by Spring 2019. The Computing Research Association (CRA) has been involved in shaping public policy of relevance to computing research for more than two decades (https://cra.org/govaffairs/blog/). The CRA Government Affairs program has enhanced its efforts to help members of the computing research community contribute to the public debate knowledgeably and effectively.

Ed Felten, Princeton Professor of Computer Science and Public Affairs, has been confirmed by the U.S. Senate to be a member of the U.S. Privacy and Civil Liberties Oversight Board, a bipartisan agency within the executive branch. He will serve as a part-time member of the board while continuing his teaching and research at Princeton. The five-person board is charged with evaluating and advising on executive branch anti-terrorism measures with respect to privacy and civil liberties. “It is a very important issue,” Felten said. “Federal agencies, in the course of doing national security work, have access to a lot of data about people and they do intercept data. It’s important to make sure they are doing those things in the way they should and not overstepping.” Felten added that the board has the authority to review programs that require secrecy. “The public has limited visibility into some of these programs,” Felten said. “The board’s job is to look out for the public interest.”

On October 24, 2018, the National Academies of Sciences, Engineering, and Medicine Forum on Aging, Disability, and Independence will host a workshop in Washington, DC, exploring the potential of artificial intelligence (AI) to foster a balance of safety and autonomy for older adults and people with disabilities who strive to live as independently as possible: http://nationalacademies.org/hmd/Activities/Aging/AgingDisabilityForum/2018-OCT-24.aspx

According to Reuters, Amazon scrapped an AI recruiting tool that showed bias against women in automated employment screening.

ML Safety by Design

In a recent post, we discussed the need for policymakers to recognize that AI and Autonomous Systems (AI/AS) always require varying degrees of human involvement (“hybrid” human/machine systems). Understanding the potential and limitations of combining technologies and humans is important for realistic policymaking. A key element, along with accurate forecasts of changes in technology, is the safety of AI/AS–human products, as discussed in the IEEE report “Ethically Aligned Design” (subtitled “A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems”), in Ben Shneiderman’s excellent summary of and comments on the report, and in the YouTube video of his Turing Institute Lecture on “Algorithmic Accountability: Design for Safety”.

In Shneiderman’s proposal for a National Algorithms Safety Board, he writes “What might help are traditional forms of independent oversight that use knowledgeable people who have powerful tools to anticipate, monitor, and retrospectively review operations of vital national services. The three forms of independent oversight that have been used in the past by industry and governments—planning oversight, continuous monitoring by knowledgeable review boards using advanced software, and a retrospective analysis of disasters—provide guidance for responsible technology leaders and concerned policy makers. Considering all three forms of oversight could lead to policies that prevent inadequate designs, biased outcomes, or criminal actions.”

Efforts to provide “safety by design” include work at Google on Human-Centered Machine Learning and a general “human-centered approach that foregrounds responsible AI practices and products that work well for all people and contexts. These values of responsible and inclusive AI are at the core of the AutoML suite of machine learning products …”
Further work is needed to systematize and enforce good practices in human-centered AI design and development, including algorithmic transparency and guidance for the selection of unbiased data for machine learning systems.
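To make the idea of auditing for bias concrete, here is a minimal illustrative sketch of one common check, the demographic parity difference (the largest gap in positive-decision rates across groups). The function names, the example data, and the 0.1 review threshold are all hypothetical, chosen for exposition; they do not describe any specific product or the practices mentioned above.

```python
# Illustrative fairness check: demographic parity difference.
# All names, data, and the 0.1 threshold below are hypothetical.

def selection_rate(decisions):
    """Fraction of positive (e.g., 'hire' or 'approve') decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in selection rate between any two groups.

    decisions_by_group maps a group label to a list of 0/1 decisions.
    A large gap flags a system for human review; it does not by itself
    prove bias, since legitimate base rates may differ across groups.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # selection rate 0.625
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # selection rate 0.250
}
gap = demographic_parity_difference(decisions)
print(f"demographic parity difference: {gap:.3f}")  # prints 0.375
if gap > 0.1:  # hypothetical threshold for triggering human review
    print("flag for human review")
```

A continuous-monitoring pipeline of the kind Shneiderman describes could run such checks on every retrained model, with the threshold and response set by a knowledgeable review board rather than by the developers alone.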

2018 ACM SIGAI Student Essay Contest on Artificial Intelligence Technologies

After the success of the 2017 contest, we are happy to announce another round of the ACM SIGAI Student Essay Contest on Artificial Intelligence Technologies!

Download a PDF of the call here: https://tinyurl.com/SIGAIEssay2018

Win one of several $500 monetary prizes or a Skype conversation with a leading AI researcher including Joanna Bryson, Murray Campbell, Eric Horvitz, Peter Norvig, Iyad Rahwan, Francesca Rossi, or Toby Walsh.

We have extended the deadline to February 15th, 2019, Anywhere on Earth. Please get your submissions in!

Students interested in these topics should also consider submitting to the 2019 Artificial Intelligence, Ethics, and Society Conference and/or its Student Program; the deadline is in early November. See the conference website for details.

2018 Topic

The ACM Special Interest Group on Artificial Intelligence (ACM SIGAI) supports the development and responsible application of Artificial Intelligence (AI) technologies. From intelligent assistants to self-driving cars, an increasing number of AI technologies now (or soon will) affect our lives. Examples include Google Duplex (Link) talking to humans, Drive.ai (Link) offering rides in US cities, chatbots advertising movies by impersonating people (Link), and AI systems making decisions about parole (Link) and foster care (Link). We interact with AI systems, whether we know it or not, every day.

Such interactions raise important questions. ACM SIGAI is in a unique position to shape the conversation around these and related issues and is thus interested in obtaining input from students worldwide to help shape the debate. We therefore invite all students to enter an essay in the 2018 ACM SIGAI Student Essay Contest, to be published in the ACM SIGAI newsletter “AI Matters,” addressing one or both of the following topic areas (or any other question in this space that you feel is important) while providing supporting evidence:

  • What requirements, if any, should be imposed on AI systems and technology when interacting with humans who may or may not know that they are interacting with a machine?  For example, should they be required to disclose their identities? If so, how? See, for example, “Turing’s Red Flag” in CACM (Link).
  • What requirements, if any, should be imposed on AI systems and technology when making decisions that directly affect humans? For example, should they be required to make transparent decisions? If so, how?  See, for example, the IEEE’s summary discussion of Ethically Aligned Design (Link).

Each of the above topic areas raises further questions, including:

  • Who is responsible for the training and maintenance of AI systems? See, for example, Google’s (Link), Microsoft’s (Link), and IBM’s (Link) AI Principles.
  • How do we educate ourselves and others about these issues and possible solutions? See, for example, new ways of teaching AI ethics (Link).
  • How do we handle the fact that different cultures see these problems differently?  See, for example, Joi Ito’s discussion in Wired (Link).
  • Which steps can governments, industries, or organizations (including ACM SIGAI) take to address these issues?  See, for example, the goals and outlines of the Partnership on AI (Link).

All sources must be cited. However, we are not interested in summaries of the opinions of others. Rather, we are interested in the informed opinions of the authors. Writing an essay on this topic requires some background knowledge. Possible starting points for acquiring such background knowledge are:

  • the revised ACM Code of Ethics (Link), especially Section 3.7, and a discussion of why the revision was necessary (Link),
  • IEEE’s Ethically Aligned Design (Link), and
  • the One Hundred Year Study on AI and Life in 2030 (Link).

ACM and ACM SIGAI

ACM brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. As the world’s largest computing society, ACM strengthens the profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM’s reach extends to every part of the globe, with more than half of its 100,000 members residing outside the U.S.  Its growing membership has led to Councils in Europe, India, and China, fostering networking opportunities that strengthen ties within and across countries and technical communities. Their actions enhance ACM’s ability to raise awareness of computing’s important technical, educational, and social issues around the world. See https://www.acm.org/ for more information.

ACM SIGAI brings together academic and industrial researchers, practitioners, software developers, end users, and students who are interested in AI. It promotes and supports the growth and application of AI principles and techniques throughout computing, sponsors or co-sponsors AI-related conferences, organizes the Career Network and Conference for early-stage AI researchers, sponsors recognized AI awards, supports AI journals, provides scholarships to its student members to attend conferences, and promotes AI education and publications through various forums and the ACM digital library. See https://sigai.acm.org/ for more information.

Format and Eligibility

The ACM SIGAI Student Essay Contest is open to all ACM SIGAI student members at the time of submission. (If you are a student but not an ACM SIGAI member, you can join ACM SIGAI before submission for just US$ 11 at https://goo.gl/6kifV9 by selecting Option 1, even if you are not an ACM member.) Essays can be authored by one or more ACM SIGAI student members, but each ACM SIGAI student member can (co-)author only one essay.

All authors must be ACM SIGAI members at the time of submission. Submissions not meeting this requirement will not be reviewed.

Essays should be submitted as PDF documents of any style, with at most 5,000 words, via EasyChair at https://easychair.org/conferences/?conf=acmsigai2018.

The original submission deadline of January 10th, 2019 has been extended: submissions are now due February 15th, 2019, Anywhere on Earth. Please get your submissions in!

The authors certify with their submissions that they have followed the ACM publication policies on “Author Representations,” “Plagiarism” and “Criteria for Authorship” (http://www.acm.org/publications/policies/). They also certify with their submissions that they will transfer the copyright of winning essays to ACM.

Judges and Judging Criteria

Winning entries from last year’s essay contest can be found in recent issues of the ACM SIGAI newsletter “AI Matters,” specifically Volume 3, Issue 3 (http://sigai.acm.org/aimatters/3-3.html) and Volume 3, Issue 4 (http://sigai.acm.org/aimatters/3-4.html).

Entries will be judged by the following panel of leading AI researchers and ACM SIGAI officers. Winning essays will be selected based on depth of insight, creativity, technical merit, and novelty of argument. All decisions by the judges are final.

    • Rediet Abebe, Cornell University
    • Emanuelle Burton, University of Illinois at Chicago
    • Sanmay Das, Washington University in St. Louis  
    • John P. Dickerson, University of Maryland
    • Virginia Dignum, Delft University of Technology
    • Tina Eliassi-Rad, Northeastern University
    • Judy Goldsmith, University of Kentucky
    • Amy Greenwald, Brown University
    • H. V. Jagadish, University of Michigan
    • Sven Koenig, University of Southern California  
    • Benjamin Kuipers, University of Michigan  
    • Nicholas Mattei, IBM Research
    • Alexandra Olteanu, Microsoft Research
    • Rosemary Paradis, Leidos
    • Kush Varshney, IBM Research
    • Roman Yampolskiy, University of Louisville
    • Yair Zick, National University of Singapore

Prizes

All winning essays will be published in the ACM SIGAI newsletter “AI Matters.” ACM SIGAI provides five monetary awards of USD 500 each as well as 45-minute Skype sessions with the following AI researchers:

    • Joanna Bryson, Reader (Assoc. Prof) in AI, University of Bath
    • Murray Campbell, Senior Manager, IBM Research AI
    • Eric Horvitz, Managing Director, Microsoft Research
    • Peter Norvig, Director of Research, Google
    • Iyad Rahwan, Associate Professor, MIT Media Lab, and head of the Scalable Cooperation group
    • Francesca Rossi, AI and Ethics Global Lead, IBM Research AI
    • Toby Walsh, Scientia Professor of Artificial Intelligence, UNSW Sydney, Data61 and TU Berlin

One award is given per winning essay. Authors or teams of authors of winning essays will pick (in a pre-selected order) an available Skype session or one of the monetary awards until all Skype sessions and monetary awards have been claimed. ACM SIGAI reserves the right to substitute a Skype session with a different AI researcher, or a monetary award for a Skype session, in case an AI researcher becomes unexpectedly unavailable. Some prizes might not be awarded if the number of high-quality submissions is smaller than the number of prizes.

Questions?

In case of questions, please first check the ACM SIGAI blog for announcements and clarifications: https://sigai.acm.org/aimatters/blog/. You can also contact the ACM SIGAI Student Essay Contest Organizers at sigai@member.acm.org.

  • Nicholas Mattei (IBM Research) – ACM SIGAI Student Essay Contest Organizer and AI and Society Officer

with involvement from

    • Sven Koenig (University of Southern California), ACM SIGAI Chair
    • Sanmay Das (Washington University in St. Louis), ACM SIGAI Vice Chair
    • Rosemary Paradis (Leidos), ACM SIGAI Secretary/Treasurer
    • Benjamin Kuipers (University of Michigan), ACM SIGAI Ethics Officer
    • Amy McGovern (University of Oklahoma), ACM SIGAI AI Matters Editor-in-Chief

WEF Report on the Future of Jobs

The World Economic Forum recently released a report on the future of jobs. Their analyses refer to the Fourth Industrial Revolution and their Centre for the Fourth Industrial Revolution.
The report states that
“The Fourth Industrial Revolution is interacting with other socio-economic and demographic factors to create a perfect storm of business model change in all industries, resulting in major disruptions to labour markets. New categories of jobs will emerge, partly or wholly displacing others. The skill sets required in both old and new occupations will change in most industries and transform how and where people work. It may also affect female and male workers differently and transform the dynamics of the industry gender gap.
The Future of Jobs Report aims to unpack and provide specific information on the relative magnitude of these trends by industry and geography, and on the expected time horizon for their impact to be felt on job functions, employment levels and skills.”

The report concludes that by 2022 more jobs may be created than are lost, but only if various stakeholders, including those making education policy, make wise decisions.

Vehicle automation: safe design, scientific advances, and smart policy

Following previous policy posts on terminology and popular discourse about AI, the focus today is on how the way we talk about automation shapes policy. “Unmanned Autonomous Vehicle (UAV)” is a term that justifiably creates fear in the general public, but talk about a UAV usually misses the roles of humans and human decision making. Likewise, discussion of an “automated decision maker (ADM)” ignores the social and legal responsibility of those who design, manufacture, implement, and operate “autonomous” systems. The AI community has an important role to play in promoting correct and realistic use of these concepts in discussions of science and technology systems that increase automation. The concept of a “hybrid system” might be helpful here for understanding the potential and limitations of combinations of technologies and humans in AI and Autonomous Systems (AI/AS) that require less from humans over time.

Safe Design

In addition to avoiding confusion and managing expectations, design approaches and analyses of the performance of existing systems with automation are crucial to developing safe systems with which the public and policymakers can feel comfortable. In this regard, stakeholders should read information on the design of systems with automation components, such as the IEEE report “Ethically Aligned Design”, which is subtitled “A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems”. The report says of AI and Autonomous Systems (AI/AS): “We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives.” See also Ben Shneiderman’s excellent summary and comments on the report, the YouTube video of his Turing Institute Lecture on “Algorithmic Accountability: Design for Safety”, and his proposal for a National Algorithms Safety Board.

Advances in AI/AS Science and Technology

Another perspective on the automation issue is the need to increase the safety of systems through advances in science and technology. In a future blog post, we will present the transcript of an interview with Dr. Harold Szu about the need for a next generation of AI that moves closer to brain-style computing and incorporates human behaviors into AI/AS systems. Dr. Szu was the founder, a former president, and a former governor of the International Neural Network Society, and is acknowledged for outstanding contributions to artificial neural network (ANN) applications and scientific innovations.

Policy and Ethics

Over the summer of 2018, increased activity in Congress and state legislatures focused on understandings, accurate and not, of “unmanned autonomous vehicles” and what policies should be in place. The following examples are interesting both for possible interventions and for their use of AI/AS terminology:

House Energy & Commerce Committee’s press release: the SELF DRIVE Act.
CNBC Commentary by Reps. Bob Latta (R-OH) and Jan Schakowsky (D-IL).

Politico, 08/03/2018: “Trial lawyers speak out on Senate self-driving car bill”, by Brianna Gurciullo with help from Lauren Gardner.
“AV NON-STARTER: After being mum for months, the American Association for Justice said publicly Thursday that it has been pressing for the Senate’s self-driving car bill, S. 1885 (115) (definitions on p.42), to stipulate that companies can’t force arbitration, our Tanya Snyder reports for Pros. The trial lawyers group is calling for a provision to make sure ‘when a person, whether a passenger or pedestrian, is injured or killed by a driverless car, that person or their family is not forced into a secret arbitration proceeding,’ according to a statement. Senate Commerce Chairman John Thune (R-S.D.) has said that arbitration has been ‘a thorny spot’ in bill negotiations.”