Policy in the News

The Computing Community Consortium (CCC) announced a new initiative to create a Roadmap for Artificial Intelligence. SIGAI’s Yolanda Gil (University of Southern California and President-Elect of AAAI) will work with Bart Selman (Cornell University) to lead the effort. The initiative will support the U.S. Administration’s efforts in this area and will involve academic and industrial researchers in mapping a course for needed research in AI. They will hold a series of workshops in 2018 and 2019 to produce the Roadmap by Spring 2019. The Computing Research Association (CRA) has been involved in shaping public policy of relevance to computing research for more than two decades (https://cra.org/govaffairs/blog/). The CRA Government Affairs program has enhanced its efforts to help members of the computing research community contribute to the public debate knowledgeably and effectively.

Ed Felten, Princeton Professor of Computer Science and Public Affairs, has been confirmed by the U.S. Senate to be a member of the U.S. Privacy and Civil Liberties Oversight Board, a bipartisan agency within the executive branch. He will serve as a part-time member of the board while continuing his teaching and research at Princeton. The five-person board is charged with evaluating and advising on executive branch anti-terrorism measures with respect to privacy and civil liberties. “It is a very important issue,” Felten said. “Federal agencies, in the course of doing national security work, have access to a lot of data about people and they do intercept data. It’s important to make sure they are doing those things in the way they should and not overstepping.” Felten added that the board has the authority to review programs that require secrecy. “The public has limited visibility into some of these programs,” Felten said. “The board’s job is to look out for the public interest.”

On October 24, 2018, the National Academies of Sciences, Engineering, and Medicine Forum on Aging, Disability, and Independence will host a workshop in Washington, DC, exploring the potential of artificial intelligence (AI) to foster a balance of safety and autonomy for older adults and people with disabilities who strive to live as independently as possible (http://nationalacademies.org/hmd/Activities/Aging/AgingDisabilityForum/2018-OCT-24.aspx).

According to Reuters, Amazon scrapped an AI recruiting tool that showed bias against women in automated employment screening.

ML Safety by Design

In a recent post, we discussed the need for policymakers to recognize that AI and Autonomous Systems (AI/AS) always require varying degrees of human involvement (“hybrid” human/machine systems). Understanding the potential and limitations of combining technologies and humans is important for realistic policymaking. A key element, along with accurate forecasts of changes in technology, is the safety of AI/AS-human products, as discussed in the IEEE report “Ethically Aligned Design”, subtitled “A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems”; in Ben Shneiderman’s excellent summary of and comments on the report; and in the YouTube video of his Turing Institute Lecture, “Algorithmic Accountability: Design for Safety”.

In Shneiderman’s proposal for a National Algorithms Safety Board, he writes “What might help are traditional forms of independent oversight that use knowledgeable people who have powerful tools to anticipate, monitor, and retrospectively review operations of vital national services. The three forms of independent oversight that have been used in the past by industry and governments—planning oversight, continuous monitoring by knowledgeable review boards using advanced software, and a retrospective analysis of disasters—provide guidance for responsible technology leaders and concerned policy makers. Considering all three forms of oversight could lead to policies that prevent inadequate designs, biased outcomes, or criminal actions.”

Efforts to provide “safety by design” include work at Google on Human-Centered Machine Learning and a general “human-centered approach that foregrounds responsible AI practices and products that work well for all people and contexts. These values of responsible and inclusive AI are at the core of the AutoML suite of machine learning products …”
Further work is needed to systematize and enforce good practices in human-centered AI design and development, including algorithmic transparency and guidance for the selection of unbiased data used in machine learning systems.
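To make the idea of auditable, bias-aware practice concrete, here is a minimal sketch (not drawn from any of the guidelines cited above; the data and the helper names are hypothetical) of the kind of simple disparate-impact check that an oversight or data-selection guideline might require before a screening model is deployed:

```python
# Illustrative sketch of a basic bias audit on screening outcomes.
# All data below is hypothetical; real audits would use actual
# model outputs grouped by a protected attribute.

def selection_rate(outcomes):
    """Fraction of positive outcomes (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of selection rates between two groups.
    Values below 0.8 fail the common 'four-fifths' rule of thumb."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical screening outcomes for two applicant groups.
group_a = [1, 0, 1, 0, 0, 0, 1, 0, 0, 0]  # 30% selected
group_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # 60% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
print("Passes four-fifths rule" if ratio >= 0.8 else "Fails four-fifths rule")
```

A check this simple is only a starting point, but routine, documented measurements of this kind are one way the “planning oversight” and “continuous monitoring” that Shneiderman describes could be made operational.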