News from AAAI FSS-17

This year’s Fall Symposium Series (November 9-11) provided updates and insights on advances in research and technology, including resources for discussion of AI policy issues. The symposia addressed topics in human-robot interaction, cognitive assistance in government and public sectors, military applications, human-robot collaboration, and a standard model of the mind. An important theme for public policy was advances in, and open questions about, human-AI collaboration.

The cognitive assistance sessions this year focused on government and public sector applications, particularly autonomous systems, healthcare, and education. Sessions on advances in human-technology collaboration addressed issues relevant to public policy, including privacy and algorithmic transparency. The increasing mixing of AI and humans in ubiquitous public and private systems prompted discussions of new technological developments and of the need to understand and anticipate challenges for communication and collaboration. Particular issues included jobs and the de-skilling of the workforce, the assignment of credit and blame when AI applications succeed or fail, and the role of humans in autonomous systems.

IBM’s Jim Spohrer gave an outstanding presentation, “A Look Toward the Future”, incorporating his rich experience and current work on the anticipated impacts of new technology. His slides are well worth studying, especially on the role of hardware in game-changing technologies, with likely milestones every ten years through 2045. Radical developments in technology will challenge public policy in ways that are difficult to imagine, but current policymakers and the AI community need to try.

Particular takeaways, and anticipated subjects for future blog posts, concern the public policy implications of likely far-reaching research and applications. The degree and nature of cognitive collaboration with machines, the future of jobs, new demands on educational systems as cognitive assistance becomes deep and pervasive, and anticipated radical changes in AI capabilities put the challenges to public policy in a new perspective. AI researchers and developers need to partner with social scientists to anticipate communication and societal issues as human-machine collaboration accelerates, both in system development teams and in the new workforce.

Some recommended topics for thinking about AI technology and policy are the following:
– Jim Spohrer’s slideshare
– Noriko Arai’s TED talk on the Todai Robot
– Humans, Robotics, and the Future of Manufacturing
– New education systems and the future of work
– Computing education: coding vs. learning to use systems
– The smartphone app “Seeing AI”
– AAAI resources on science policy issues

Public Policy Opportunities

USACM Council
The membership of USACM will be voting soon to elect at-large representatives to the USACM Council, with terms starting January 1st. At-large Council members whose terms expire this December 31st are Jean Camp, Simson Garfinkel, and Jonathan Smith. If you are a member of USACM and are interested in serving on USACM Council, please contact a member of the nominations committee. If someone is in line with what you think USACM should be doing, please nominate that person. Only those who have been USACM members for at least one year as of January 1, 2018, are eligible. The deadline for establishing the slate of candidates is November 13th.

ACM Policy Award
Consider nominating someone for this award, which is given in alternate years; the inaugural award has yet to be made because too few nominations were received the first time around. “The ACM Policy Award was established in 2014 to recognize an individual or small group that had a significant positive impact on the formation or execution of public policy affecting computing or the computing community. This can be for education, service, or leadership in a technology position; for establishing an innovative program in policy education or advice; for building the community or community resources in technology policy; or other notable policy activity. The award is accompanied by a $10,000 prize.” Further information and instructions are available at http://awards.acm.org/policy/nominations.
The award can recognize one or more of the following:
– Contributions to policy while working in a policy position
– Distinguished service on and contributions to policy issues
– Advanced scholarly work that has impacted policy
The deadline for nominations is January 15, 2018.

Missed Opportunities — Federal Science Policy Offices
I reached out to people who might know of prospects for the current Administration to make important policy position appointments.
Not much to report:
1. The Administration has yet to nominate a Director for the White House Office of Science and Technology Policy (OSTP). The OSTP Director traditionally serves as the president’s science adviser.
2. The Office of the Chief Technology Officer is also vacant. In the past, the CTO team helped shape Federal policies, initiatives, capacity, and investments that support the mission of harnessing the power of technology. They also worked to anticipate and guard against the consequences that can accompany new discoveries and technologies.
3. The U.S. Department of Agriculture’s chief scientist nominee, Sam Clovis, recently withdrew his name from consideration. Clovis is a climate change denier with no training in science, food, or agriculture. For months, scientists, activists, and a broad coalition of groups have come together to demand that the Senate reject his nomination.

AAAS Policy News
For timely and objective information on current science and technology issues and assistance in understanding Federal science policy, check with the AAAS Office of Government Relations at https://www.aaas.org/program/govrelations
and the AAAS Policy and Public Statements at https://www.aaas.org/about/policy-and-public-statements.

Joint Panel of ACM and IEEE

The new joint ACM/IEEE group met recently via conference calls to explore the idea of proposing a session at the 2018 RightsCon in Toronto on a topic of mutual interest to the two organizations’ ethics and policy members. Your SIGAI members Simson Garfinkel, Sven Koenig, Nick Mattei, and Larry Medsker are participating in the group. Stuart Shapiro, Chair of the ACM US Public Policy Council, is representing ACM. Members from IEEE include John C. Havens, Executive Director of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, and Dr. Ansgar Koene, University of Nottingham, working group chair for the IEEE Standard on Algorithm Bias Considerations.

The group meets again soon to propose a panel in the area of bias and algorithmic accountability. SIGAI members are welcome to nominate panel members and to volunteer. SIGAI members are also encouraged to contribute ideas that could focus the discussion and meet the following RightsCon goals:
– including speakers from a diverse range of backgrounds
– addressing an important challenge to human rights in the digital age
– engaging participants in a way that inspires real-world outcomes
(e.g., new policy approaches and innovative technology solutions)
– introducing new voices, new concepts, and a fresh take on an issue
– having the potential to encourage cross-sector collaborations
– using an innovative format to present the idea and generate outcomes

The call for proposals mentions “Artificial Intelligence, Automation, and Algorithmic Accountability” as one of its program “buckets”. RightsCon is accepting presentation proposals until November 24, 2017. The program will have 16 buckets, covering topics ranging from Digital Security and Encryption, and Artificial Intelligence, Automation, and Algorithmic Accountability, to Misinformation, Journalism, and the Future of Online Media.

Computing Community Consortium

On October 23-24, 2017, the Computing Community Consortium (CCC) will hold the Computing Research: Addressing National Priorities and Societal Needs Symposium, examining the current and future contributions of computing to national priorities and societal needs.

The Computing Community Consortium says it “has hosted dozens of research visioning workshops to imagine, discuss, and debate the future of computing and its role in addressing societal needs. The second CCC Computing Research symposium draws these topics into a program designed to illuminate current and future trends in computing and the potential for computing to address national challenges.”

You may also want to check out the CCC Blog at http://www.cccblog.org/ for policy issues of common interest for SIGAI members.

IEEE and ACM Collaborations on ATA

At last month’s USACM Panel at the National Press Club (reported in the AI Matters policy blog last time), I had the opportunity to talk with one of the panelists, Dr. Ansgar Koene, Senior Research Fellow: UnBias, CaSMa & Horizon Policy Impact. Ansgar is at the Horizon Digital Economy Research Institute, University of Nottingham, and he is the working group chair for the IEEE Standard on Algorithm Bias Considerations. Be sure to see Ansgar’s article about ‘AI gaydar’ in The Conversation: https://theconversation.com/machine-gaydar-ai-is-reinforcing-stereotypes-that-liberal-societies-are-trying-to-get-rid-of-83837.

Following the USACM Panel at the National Press Club, attendees discussed ways to bring together the voices of ACM and IEEE on Algorithmic Transparency and Accountability. One opportunity is RightsCon Toronto, May 16-18, 2018, whose call for proposals (noted above) lists “Artificial Intelligence, Automation, and Algorithmic Accountability” among its 16 program “buckets”; presentation proposals are being accepted until November 24, 2017.

A new initiative at RightsCon Toronto is Local Champions, which features leading voices in Canada’s digital rights landscape. The Local Champions will provide thought leadership, program guidance, and topic identification to ensure that the most pressing issues are represented at RightsCon.

Dr. Koene also shared information about the IEEE P7001 Working Group on the IEEE Standard on Transparency of Autonomous Systems (http://sites.ieee.org/sagroups-7001/). The working group is chaired by Prof. Alan Winfield, who is also very interested in the idea of data recorders, like airplane “black boxes”, that would provide insight into the behavior of autonomous vehicles for accident investigation (http://www.cems.uwe.ac.uk/~a-winfield/).

Please share additional opportunities for SIGAI members to join with other groups working on issues in algorithmic transparency and accountability. We also welcome your comments on the many AI applications and technologies that should be included in our focus on public policy.

National Press Club USACM Panel

Your Public Policy Officer attended the USACM Panel on Algorithmic Transparency and Accountability on Thursday, September 14th, at the National Press Club. The panelists were moderator Simson Garfinkel, Jeanna Neefe Matthews, Nicholas Diakopoulos, Dan Rubins, Geoff Cohen, and Ansgar Koene. USACM Chair Stuart Shapiro opened the event, and Ben Shneiderman provided comments from the audience.

USACM and EUACM have identified and codified a set of principles intended to ensure fairness in this evolving policy and technology ecosystem. These were a focus of the panel discussion and are as follows:
(1) awareness;
(2) access and redress;
(3) accountability;
(4) explanation;
(5) data provenance;
(6) auditability; and
(7) validation and testing.
See also the full letter in the September 2017 issue of CACM.

The panel and audience discussion ranged from frameworks for evaluating algorithms and creating policy for fairness to examples of algorithmic abuse. Language for clear communication with the public and policymakers, and even with scientists, was a concern, as has been discussed in our Public Policy blog. Algorithms in the strict sense may not always be the issue, but rather the data used to build and train a system, especially when the system is used for prediction and decision making. Much was said about the types of bias and unfairness that can be embedded in modern AI and machine learning systems. The concerns center on the ability to explain how a system works, the need to develop models of algorithmic transparency, and how policy or an independent clearinghouse might distinguish fair from problematic algorithmic systems.

Please read more about the panel discussion at https://www.acm.org/public-policy/algorithmic-panel
and
watch the very informative YouTube video of the panel at https://www.youtube.com/watch?v=DDW-nM8idgg&feature=youtu.be

September Policy Events

AI policy issues are getting national attention; please note the events below. Look for replays and videos if you cannot attend or view the live events.

Artificial Intelligence, Automation, and Jobs
Panelists at the Technology Policy Institute’s 2017 Aspen Forum talk about the impact of artificial intelligence and automation on jobs. Speakers included authors and educators, Google’s chief economist, and a Microsoft AI research specialist. C-SPAN 1, Program ID: 432196-2.
Airing details: Sep 03, 2017, 12:47pm EDT, C-SPAN 1; Sep 04, 2017, 10:19pm EDT.

Experts to Explore Far-Reaching Impact of Algorithms on Society and Best Strategies to Prevent Algorithmic Bias
USACM will host a panel event on algorithmic transparency and accountability on Thursday, September 14, from 9am to 10:30am at the National Press Club in Washington, DC. Experts Ansgar Koene (University of Nottingham), Dan Rubins (Legal Robot), Geoff A. Cohen (Stroz Friedberg), Jeanna Matthews (Clarkson University), and Nicholas Diakopoulos (Northwestern University) will discuss the impact of algorithmic decision-making in society and the technical underpinnings of algorithmic models. The panel will be moderated by Simson Garfinkel, Co-chair of USACM’s Working Group on Algorithmic Transparency and Accountability. https://www.acm.org/media-center/2017/august/usacm-ata-panel-media-advisory

Predictive Policing and Beyond

In the August 1 post, I offered a more detailed view of “algorithm” in “algorithmic transparency”, particularly for some machine learning software. The example concerned systems involving neural networks, where algorithms in the technical sense are likely not the cause for concern, but the data used to train the system could raise policy issues. “Predictive” algorithms in systems, on the other hand, are potentially a problem and need to be transparent and explained: they are susceptible to unintentional, and intentional, human bias and misuse. Today’s post gives a particular example.

Predictive policing software, popular and useful in law enforcement, is particularly prone to issues of bias, accuracy, and misuse. The algorithms are written to estimate propensity to commit a crime and to predict where crime might occur. Policy concerns stem from skepticism about the efficacy and fairness of such systems, which makes accountability and transparency especially important.
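To make the concern concrete, here is a toy sketch, entirely our own and with synthetic numbers, of how a simple hotspot-prediction rule can inherit bias from its training data rather than from the algorithm itself:

```python
# Toy example: the counting "algorithm" is neutral; the bias is in the data.
from collections import Counter

def predict_hotspots(arrest_records, top_n=1):
    """Rank districts by historical arrest counts."""
    counts = Counter(arrest_records)
    return [district for district, _ in counts.most_common(top_n)]

# If district A was historically over-patrolled, it dominates the records
# even if underlying crime rates are similar across districts.
skewed_history = ["A"] * 80 + ["B"] * 20
print(predict_hotspots(skewed_history))  # ['A'] -> more patrols in A,
                                         # more arrests in A: a feedback loop
```

The prediction step is perfectly transparent here; the policy problem lies in what the arrest records measure, which is exactly why transparency about data provenance matters as much as transparency about code.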

As stated in Slate, “The Intercept published a set of documents from a two-day event in July hosted by the U.S. Immigration and Customs Enforcement’s Homeland Security Investigations division, where tech companies were invited to learn more about the kind of software ICE is looking to procure for its new ‘Extreme Vetting Initiative.’ According to the documents, ICE is in the market for a tool that it can use to predict the potential criminality of people who come into the country.” Further information on the Slate article is available here.

The AI community should help investigate algorithmic accountability and transparency in the case of predictive policing and the subsequent application of the algorithms to new areas. We should then discuss our SIGAI position and public policy.

Algorithms and Algorithmic Transparency

Our July 15th post summarized the USACM-EUACM joint statement on Algorithmic Transparency and Accountability (ATA) and introduced the ATA FAQ project by the USACM Algorithms Working Group, whose goal is “to take the lead addressing the technical aspects of algorithms and to have content prepared for media inquiries and policymakers.” SIGAI has been asked to contribute expertise in developing content for the FAQ. Please comment on this post so we can collect and share insights with USACM. You can also send your ideas and suggestions directly to Cynthia Florentino, ACM Policy Analyst, at cflorentino@acm.org.

The focus of this post is the discussion of “algorithms” in the FAQ. Your feedback will be appreciated. Some of the input we received is as follows:
“Q: What is an algorithm?
A: An algorithm is a set of well-defined steps that leads from inputs (data) to outputs (results). Today, algorithms are used in decision-making in education, access to credit, employment, and in the criminal justice system.  An algorithm can be compared to a recipe that runs in the same way each time, automatically using the given input data. The input data is combined and placed through the same set of steps, and the output is dependent on the input data and the set of steps that comprise the algorithm.”
and
“Q: Can algorithms be explained? Why or why not?  What are the challenges?
A: It is not always possible to interpret machine learning and algorithmic models. This is because a model may use an enormous volume of data in the process of figuring out the ideal approach. This in turn, makes it hard to go back and trace how the algorithm arrived at a certain decision.”
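A minimal illustration of the contrast in these two answers may help. The sketch below is our own, not from the FAQ, and the scoring rule is invented purely for illustration. It shows the “recipe” character of a traditional algorithm: a fixed set of steps that maps the same inputs to the same outputs every time.

```python
# Illustrative only: a hand-written decision "recipe" whose steps are
# fully inspectable. Identical inputs always produce identical output.
def loan_decision(income, debt, years_employed):
    score = 0
    if income > 50_000:                 # step 1: income threshold
        score += 2
    if debt / max(income, 1) < 0.3:     # step 2: debt-to-income ratio
        score += 2
    if years_employed >= 2:             # step 3: employment history
        score += 1
    return "approve" if score >= 4 else "deny"

print(loan_decision(60_000, 12_000, 3))  # always "approve" for these inputs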

This post raises an issue with the use of the term “algorithm” in the era of Big Data, in which “machine learning” has been absorbed into data analytics and data science. On ATA issues, the AI community needs to give careful attention to definitions and concepts that enable a clear discourse on ATA policy.

A case in point, on which we welcome input from SIGAI members, is the central role of artificial neural networks (NNs) in machine learning and deep learning. In what sense is an NN algorithmic? Toward the goal of algorithmic transparency, what needs to be explained about how an NN works? From a policy perspective, what are the challenges in addressing the transparency of an NN component of machine learning frameworks with audiences of varying technical backgrounds?

The mechanisms for training neural networks are algorithmic in the traditional sense of the word: a series of steps is applied repeatedly to adjust parameters, as in multilayer perceptron learning. The algorithms in NN training methods operate the same way for every specific application in which input data is mapped to output results. Only a high-level discussion, with simplified diagrams, is practical for “explaining” these NN algorithms to policymakers and to end users of systems involving machine learning.
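As a minimal sketch of what “algorithmic in the traditional sense” means here, consider the classic perceptron update rule below. The example is our own illustration, not from any USACM material; the point is only that the training steps are fixed and repeatable, so identical data always yield identical parameters.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Classic perceptron learning: repeat well-defined update steps."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):                  # fixed, repeatable loop
        for xi, target in zip(X, y):
            prediction = 1 if xi.dot(w) + b > 0 else 0
            error = target - prediction      # -1, 0, or +1
            w += lr * error * xi             # deterministic parameter update
            b += lr * error
    return w, b

# Example: learning logical AND from four labeled points.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
```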

On the other hand, the design and implementation of applications involving NN-based machine learning are surely the real points of concern for “algorithmic transparency”. In that regard, the “explanation” of a particular application could include a careful description of the problem to be solved and of the NN design model chosen to solve it. Further, (for now) human choices are made about the number and types of input items, the numbers of nodes and layers, the method for cleaning and normalizing input data, the choice of an appropriate error measure and number of training cycles, the procedure for independent testing, and the interpretation of results with realistic uncertainty estimates. The application development procedure is algorithmic in a general sense, but the more important point is that assumptions and biases enter the design and implementation of the NN. The choice of data, and its relevance and quality, is eminently important to the validity of a system involving machine learning. Thus, the transparency of NN algorithms in the technical sense might well be explained, but the transparency and biases of the model and the implementation process are the aspects with serious policy consequences.
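As a hypothetical sketch of where these human choices enter in practice, the outline below uses scikit-learn; the data loader load_application_data and every parameter value are invented for illustration, not a recommended configuration. Each commented line marks a design decision through which assumptions and bias can enter.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_application_data()          # hypothetical loader: which data, which features?
X = StandardScaler().fit_transform(X)   # choice: cleaning/normalization method

# choice: procedure for independent testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

model = MLPClassifier(
    hidden_layer_sizes=(20, 10),        # choice: numbers of nodes and layers
    max_iter=500,                       # choice: number of training cycles
)                                       # the error measure is fixed by the model class
model.fit(X_train, y_train)

# choice: interpretation of results, with realistic uncertainty estimates
print("held-out accuracy:", model.score(X_test, y_test))
```

None of these lines is opaque in the algorithmic sense; the policy-relevant questions are which data were loaded, how they were cleaned, and how the held-out result is interpreted.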

We welcome your feedback!

USACM ATA FAQ

In the SIGAI June blog posts, we covered the USACM-EUACM joint statement on Algorithmic Transparency and Accountability (ATA). This topic is being actively discussed online and in public presentations. An interesting development is an FAQ project by the USACM Algorithms Working Group, which aims “to take the lead addressing the technical aspects of algorithms and to have content prepared for media inquiries and policymakers.” The FAQ could also help raise the profile of USACM’s work if stakeholders look to it for answers on the technical underpinnings of algorithms. The questions build on issues raised in the USACM-EUACM joint statement on ATA. The briefing materials will also support a forthcoming USACM policy event.

The FAQ is interesting in its own right, and an AI Matters blog discussion could be helpful to USACM and to the ongoing evolution of the ATA issue. Please comment on this post so we can collect and share your input with USACM. You can also send your ideas and suggestions directly to Cynthia Florentino, ACM Policy Analyst, at cflorentino@acm.org.

Below are the questions being discussed. The USACM Working Group will appreciate input from SIGAI. I hope you enjoy thinking about these questions and the ideas around the issue of algorithmic transparency and accountability.

Current Questions in the DRAFT Working Document
Frequently Asked Questions
USACM Statement on Algorithmic Transparency and Accountability

Q: What is an algorithm?

Q: Can algorithms be explained? Why or why not? What are the challenges?

Q: What are the technical challenges associated with data inputs to an algorithm?

Q: What are machine learning models?

Q: What are neural networks?

Q: What are decision trees?

Q: How can we introduce checks and balances into the development and operation of software to make it impartial?

Q: When trying to introduce checks and balances, what is the impact of AI algorithms that are unable to export an explanation of their decisions?

Q: What lies ahead for algorithms?

Q: Who is the intended audience?

Q: Are these principles just for the US, or are they intended to be applied worldwide?

Q: Are these principles for government or corporations to follow?

Q: Where did you get the idea for this project?

Q: What kind of decisions are being made by computers today?

Q: Can you give examples of biased decisions made by computers?

Q: Why is there resistance to explaining the decisions made by computers?

Q: Who is responsible for biased decisions made with input from a machine learning algorithm?

Q: What are sources of bias in algorithmic decision making?

Q: What are some examples of the data sets used to train machine learning algorithms that contain bias?

Q: Human decision makers can be biased as well. Are decisions made by computers more or less biased?

Q: Can algorithms be biased even if they do not look at protected characteristics like race, gender, disability status, etc.?

Q: What are some examples of proprietary algorithms being used to make decisions of public interest?

Q: Are there other sets of principles in this area?

Q: Are there other organizations working in this area?

Q: Are there any academic courses in this area?

*********

Your suggestions will be collected and sent to the USACM Algorithms Working Group, and you can also share your input directly with Cynthia Florentino, ACM Policy Analyst, at cflorentino@acm.org.

China Matters

In a recent post, AI Matters welcomed ACM SIGAI China and its members as a chapter of ACM SIGAI. Prof. Le Dong, University of Electronic Science and Technology of China, is the Chair of SIGAI China. The AI Matters policy blog will explore areas of common interest in AI policy and issues for discussion in future postings.

As their first event, ACM SIGAI China held the Symposium on New Challenges and Opportunities in the Post-Turing AI Era in May, 2017, as part of the ACM Turing 50th Celebration Conference in Shanghai. Keynote presentations addressed the challenges of bringing robotic and other AI technologies into practice, including a keynote by our own Prof. Sven Koenig on timely decision making by robots and other agents in their environments.

The Symposium included workshops that relate particularly to policy issues. The Career of the Young in the Emerging Field featured rising young scientists discussing the human responsibilities and challenges that accompany the many career opportunities in AI. The Gold-Rush Again to Western China: When ACM Meets B&R workshop focused on the Belt and Road Initiative, a trans-Eurasia, across-ocean economic strategy, and the related opportunities for computer science. The IoT and Cyberspace Security workshop explored opportunities and issues in vehicular sensor networks, traffic management, intelligent and green transportation, and the collection of data on people and things for operating urban infrastructure.

We look forward to interactions with our colleagues in the ACM SIGAI China as we explore policy issues along with discussing cutting-edge research in artificial intelligence.