September Policy Events

AI policy issues are getting national attention. Two events are highlighted below; look for replays and recorded video if you cannot attend or watch live.

Artificial Intelligence, Automation, and Jobs
Panelists at the Technology Policy Institute’s 2017 Aspen Forum talk about the impact of artificial intelligence and automation on jobs. Speakers included authors and educators, Google’s chief economist, and a Microsoft AI research specialist. C-SPAN 1 Program ID: 432196-2
Airing Details: Sep 03, 2017 | 12:47pm EDT | C-SPAN 1 • Sep 04, 2017 | 10:19pm EDT

Experts to Explore Far-Reaching Impact of Algorithms on Society and Best Strategies to Prevent Algorithmic Bias
USACM will be hosting a panel event on algorithmic transparency and accountability on Thursday, September 14, from 9am to 10:30am at the National Press Club in Washington, DC. Experts Ansgar Koene (University of Nottingham), Dan Rubins (Legal Robot), Geoff A. Cohen (Stroz Friedberg), Jeanna Matthews (Clarkson University), and Nicholas Diakopoulos (Northwestern University) will discuss the impact of algorithmic decision-making in society and the technical underpinnings of algorithmic models. The panel will be moderated by Simson Garfinkel, Co-chair of USACM’s Working Group on Algorithmic Transparency and Accountability. https://www.acm.org/media-center/2017/august/usacm-ata-panel-media-advisory

Predictive Policing and Beyond

The August 1 post offered a more detailed view of the term “algorithm” in “algorithmic transparency,” particularly as it applies to machine learning software. The example concerned systems built on neural networks, where the algorithms in the technical sense are likely not the cause for concern, but the data used to train the system can raise policy issues. “Predictive” algorithms embedded in deployed systems, on the other hand, are a potential problem and need to be transparent and explained: they are susceptible to unintentional, and intentional, human bias and misuse. Today’s post gives a particular example.

Predictive policing software, now popular in law enforcement agencies, is particularly prone to problems of bias, accuracy, and misuse. Its algorithms attempt to estimate an individual’s propensity to commit a crime and to forecast where crime might occur. Because there is justified skepticism about the efficacy and fairness of such systems, accountability and transparency are especially important.

As stated in Slate, “The Intercept published a set of documents from a two-day event in July hosted by the U.S. Immigration and Customs Enforcement’s Homeland Security Investigations division, where tech companies were invited to learn more about the kind of software ICE is looking to procure for its new ‘Extreme Vetting Initiative.’ According to the documents, ICE is in the market for a tool that it can use to predict the potential criminality of people who come into the country.” Further information is available in the Slate article.

The AI community should help investigate algorithmic accountability and transparency in predictive policing and in the subsequent application of these algorithms to new areas. We should then discuss SIGAI’s position on the related public policy.

Algorithms and Algorithmic Transparency

Our July 15th post summarized the USACM-EUACM joint statement on Algorithmic Transparency and Accountability (ATA) and introduced the ATA FAQ project by the USACM Algorithms Working Group. Their goal is “to take the lead addressing the technical aspects of algorithms and to have content prepared for media inquiries and policymakers.” SIGAI has been asked to contribute expertise in developing content for the FAQ. Please comment on this post so we can collect and share insights with USACM. You can also send your ideas and suggestions directly to Cynthia Florentino, ACM Policy Analyst, at cflorentino@acm.org.

The focus of this post is the discussion of “algorithms” in the FAQ. Your feedback will be appreciated. Some of the input we received is as follows:
“Q: What is an algorithm?
A: An algorithm is a set of well-defined steps that leads from inputs (data) to outputs (results). Today, algorithms are used in decision-making in education, access to credit, employment, and in the criminal justice system.  An algorithm can be compared to a recipe that runs in the same way each time, automatically using the given input data. The input data is combined and placed through the same set of steps, and the output is dependent on the input data and the set of steps that comprise the algorithm.”
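To make the recipe analogy concrete, here is a minimal sketch in Python. Every name, input, and threshold in it is hypothetical, invented for illustration only; the point is simply that the same well-defined steps run the same way on whatever input data they are given:

```python
# A toy "recipe": fixed, well-defined steps from inputs (data) to an output
# (result). Identical inputs always yield identical outputs. The rule and
# all thresholds below are invented purely for illustration.

def score_application(income: float, debts: float, on_time_payments: int) -> str:
    """Run the same steps on the given input data every time."""
    ratio = debts / income if income > 0 else float("inf")           # step 1: combine inputs
    points = (10 if ratio < 0.4 else 0) + min(on_time_payments, 10)  # step 2: compute a score
    return "approve" if points >= 15 else "review"                   # step 3: map score to result

print(score_application(income=50_000, debts=15_000, on_time_payments=8))  # approve
```

Even this toy example shows where policy questions enter: the steps are deterministic, but a human chose the inputs and the thresholds.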
and
“Q: Can algorithms be explained? Why or why not?  What are the challenges?
A: It is not always possible to interpret machine learning and algorithmic models. This is because a model may use an enormous volume of data in the process of figuring out the ideal approach. This, in turn, makes it hard to go back and trace how the algorithm arrived at a certain decision.”
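One way to make the challenge concrete is to contrast a model whose decision steps can be printed with one whose fitted parameters cannot be read as a trace. The sketch below is illustrative only; it assumes scikit-learn is installed and uses synthetic data:

```python
# Assumes scikit-learn; the data are synthetic. A shallow decision tree can
# export its fitted decision procedure as readable if/then rules, while a
# fitted neural network exposes only weight matrices with no comparable trace.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))               # human-readable rules for every decision path

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])   # raw weight matrices, here (4, 16) and (16, 1)
```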

This post raises an issue with the use of the term “algorithm” in the era of Big Data, in which “machine learning” has been absorbed into data analytics and data science. On ATA issues, the AI community needs to give careful attention to definitions and concepts that enable a clear discourse on ATA policy.

A case in point, and one on which we welcome input from SIGAI members, is the central role of artificial neural networks (NNs) in machine learning and deep learning. In what sense is a NN algorithmic? Toward the goal of algorithmic transparency, what needs to be explained about how a NN works? From a policy perspective, what are the challenges in addressing the transparency of a NN component of a machine learning framework for audiences of varying technical backgrounds?

The mechanisms for training neural networks are algorithmic in the traditional sense of the word: a series of steps is applied repeatedly to adjust parameters, as in multilayer perceptron learning. These training algorithms operate the same way for every application in which input data is mapped to output results. Only a high-level discussion with simplified diagrams is practical for “explaining” them to policymakers and to end users of systems involving machine learning. A minimal sketch of the repeated steps appears below.
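The loop below trains a single artificial neuron on synthetic data; a full multilayer perceptron repeats the same forward-pass, error-measurement, parameter-update pattern layer by layer via backpropagation. This is a sketch of the idea, not of any particular system:

```python
import numpy as np

# The training procedure is a fixed series of steps, repeated each cycle,
# that operates the same way regardless of the application domain.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # input data (from any application)
y = (X.sum(axis=1) > 0).astype(float)      # synthetic target labels
w, b, lr = rng.normal(size=3), 0.0, 0.1    # parameters to adjust; learning rate

for cycle in range(1000):                   # repeat the same steps
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # step 1: forward pass
    err = p - y                             # step 2: measure error
    w -= lr * (X.T @ err) / len(y)          # step 3: adjust the weights...
    b -= lr * err.mean()                    # ...and the bias

print(((p > 0.5) == (y > 0.5)).mean())      # training accuracy of the toy model
```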

On the other hand, the design and implementation of applications built on NN-based machine learning are surely the real points of concern for “algorithmic transparency.” In that regard, the “explanation” of a particular application should include a careful description of the problem to be solved and of the NN model chosen to solve it. Further, humans (for now) make the choices about the number and types of inputs, the number of nodes and layers, the method for cleaning and normalizing input data, the error measure and the number of training cycles, the procedure for independent testing, and the interpretation of results with realistic uncertainty estimates. The development procedure is algorithmic only in a general sense; the more important point is that assumptions and biases enter through the design and implementation of the NN. The choice of data, and its relevance and quality, is critically important in judging the validity of a system involving machine learning. Thus, the NN algorithms themselves, in the technical sense, might well be explained, but the transparency of the model and the implementation process, and the biases they embed, are the aspects with serious policy consequences. The sketch below lists the kinds of human choices involved.
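Every name and value in this sketch is hypothetical; the point is that each entry is a human decision, made before any training algorithm runs, and each is a potential source of bias that transparency efforts should cover:

```python
# All entries are illustrative; none come from a real system. Each one is a
# human choice made during design and implementation, not by the algorithm.
design_choices = {
    "input_features": ["age", "zip_code", "prior_events"],  # which data to use at all
    "normalization": "z-score per feature",                 # cleaning and scaling of inputs
    "hidden_layers": (32, 16),                              # numbers of nodes and layers
    "error_measure": "cross-entropy",                       # what counts as 'wrong'
    "training_cycles": 200,                                 # when to stop training
    "test_split": 0.2,                                      # independent testing plan
}
for choice, value in design_choices.items():
    print(f"{choice}: {value}  <- human decision; document and justify it")
```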

We welcome your feedback!

USACM ATA FAQ

In the SIGAI June blog posts, we covered the USACM-EUACM joint statement on Algorithmic Transparency and Accountability (ATA). This topic is being actively discussed online and in public presentations. An interesting development is an FAQ project by the USACM Algorithms Working Group, which aims “to take the lead addressing the technical aspects of algorithms and to have content prepared for media inquiries and policymakers.” The FAQ could also help raise the profile of USACM’s work if stakeholders look to it for answers on the technical underpinnings of algorithms. The questions build on issues raised in the USACM-EUACM joint statement on ATA. The briefing materials will also support a forthcoming USACM policy event.

The FAQ is interesting in its own right, and an AI Matters blog discussion could be helpful to USACM and to the ongoing evolution of the ATA issue. Please comment on this post so we can collect and share your input with USACM. You can also send your ideas and suggestions directly to Cynthia Florentino, ACM Policy Analyst, at cflorentino@acm.org.

Below are the questions being discussed. The USACM Working Group will appreciate input from SIGAI. I hope you enjoy thinking about these questions and the ideas surrounding algorithmic transparency and accountability.

Current Questions in the DRAFT Working Document
Frequently Asked Questions
USACM Statement on Algorithmic Transparency and Accountability

Q: What is an algorithm?

Q: Can algorithms be explained? Why or why not? What are the challenges?

Q: What are the technical challenges associated with data inputs to an algorithm?

Q: What are machine learning models?

Q: What are neural networks?

Q: What are decision trees?

Q: How can we introduce checks and balances into the development and operation of software to make it impartial?

Q: When trying to introduce checks and balances, what is the impact of AI algorithms that are unable to export an explanation of their decisions?

Q: What lies ahead for algorithms?

Q: Who is the intended audience?

Q: Are these principles just for the US, or are they intended to be applied worldwide?

Q: Are these principles for government or corporations to follow?

Q: Where did you get the idea for this project?

Q: What kind of decisions are being made by computers today?

Q: Can you give examples of biased decisions made by computers?

Q: Why is there resistance to explaining the decisions made by computers?

Q: Who is responsible for biased decisions made with input from a machine learning algorithm?

Q: What are sources of bias in algorithmic decision making?

Q: What are some examples of the data sets used to train machine learning algorithms that contain bias?

Q: Human decision makers can be biased as well. Are decisions made by computers more or less biased?

Q: Can algorithms be biased even if they do not look at protected characteristics like race, gender, disability status, etc.?

Q: What are some examples of proprietary algorithms being used to make decisions of public interest?

Q: Are there other sets of principles in this area?

Q: Are there other organizations working in this area?

Q: Are there any academic courses in this area?

*********

Your suggestions will be collected and sent to the USACM Algorithms Working Group, and you can also share your input directly with Cynthia Florentino, ACM Policy Analyst, at cflorentino@acm.org.

China Matters

In a recent post, AI Matters welcomed ACM SIGAI China and its members as a chapter of ACM SIGAI. Prof. Le Dong, University of Electronic Science and Technology of China, is the Chair of SIGAI China. The AI Matters policy blog will explore areas of common interest in AI policy and issues for discussion in future postings.

As their first event, ACM SIGAI China held the Symposium on New Challenges and Opportunities in the Post-Turing AI Era in May, 2017, as part of the ACM Turing 50th Celebration Conference in Shanghai. Keynote presentations addressed the challenges of bringing robotic and other AI technologies into practice, including a keynote by our own Prof. Sven Koenig on timely decision making by robots and other agents in their environments.

The Symposium included workshops that relate directly to policy issues. “The Career of the Young in the Emerging Field” featured rising new scientists discussing the human responsibilities and challenges that accompany the many career opportunities in AI. “The Gold-Rush Again to Western China: When ACM Meets B&R” focused on the Belt and Road Initiative, a trans-Eurasia, across-ocean economic strategy, and the related opportunities for computer science. The “IoT and Cyberspace Security” workshop explored opportunities and issues in vehicular sensor networks, traffic management, intelligent and green transportation, and the collection of data on people and things for operating urban infrastructure.

We look forward to interactions with our colleagues in the ACM SIGAI China as we explore policy issues along with discussing cutting-edge research in artificial intelligence.

Algorithmic Accountability

The previous SIGAI public policy post covered the USACM-EUACM joint statement on Algorithmic Transparency and Accountability. Several interesting developments and opportunities are available for SIGAI members to discuss related topics. In particular, individuals and groups are calling for measures to provide independent oversight that might mitigate the dangers of biased, faulty, and malicious algorithms. Transparency is important for data systems and algorithms that guide life-critical systems such as healthcare, air traffic control, and nuclear control rooms. Ben Shneiderman’s Turing lecture is highly recommended on this point: https://www.youtube.com/watch?v=UWuDgY8aHmU

A robust discussion on the SIGAI Public Policy blog would be a great way to explore ideas on oversight measures. Additionally, we should weigh in on some fundamental questions such as those raised by Ed Felten in his recent article “What does it mean to ask for an ‘explainable’ algorithm?” He sets up an excellent framework for the discussion, and the comments on his article raise differing points of view we should consider.

Felten says that “one of the standard critiques of using algorithms for decision-making about people, and especially for consequential decisions about access to housing, credit, education, and so on, is that the algorithms don’t provide an ‘explanation’ for their results or the results aren’t ‘interpretable.’ This is a serious issue, but discussions of it are often frustrating. The reason, I think, is that different people mean different things when they ask for an explanation of an algorithm’s results.” Felten discusses four types of explainability:
1.  A claim of confidentiality (institutional/legal). Someone withholds relevant information about how a decision is made.
2.  Complexity (barrier to big picture understanding). Details about the algorithm are difficult to explain, but the impact of the results on a person can still be understood.
3.  Unreasonableness (results don’t make sense). The workings of the algorithm are clear and justified by statistical evidence, but the results still seem unreasonable because how the world actually functions is not fully understood.
4.  Injustice (justification for designing the algorithm). Using the algorithm is unfair, unjust, or morally wrong.

In addition, SIGAI should provide input on the nature of AI systems and on what it means to “explain” how decision-making AI technologies work: for example, the role of algorithms in supervised and unsupervised systems versus the choices of data and design options made in creating an operational system.

Your comments are welcome. Also, please share what work you may be doing in the area of algorithmic transparency.

Algorithmic Transparency and Accountability

Algorithms in AI and data science software are having increasing impacts on individuals and society. Along with the many benefits of intelligent systems, potential harmful bias needs to be addressed. A USACM-EUACM joint statement was released on May 25, 2017, and can be found at http://www.acm.org/binaries/content/assets/publicpolicy/2017_joint_statement_algorithms.pdf. See the ACM Technology Blog for discussion of the statement. The ACM US Public Policy Council approved the principles earlier this year.

In a message to USACM members, ACM Director of Public Policy Renee Dopplick said, “EUACM has endorsed the Statement on Algorithmic Transparency and Accountability. Furthering its impacts, we are re-releasing it as a joint statement with a related media release. The USACM-EUACM Joint Statement demonstrates and affirms shared support for these principles to help minimize the potential for harm in algorithmic decision making and thus strengthens our ability to further expand our policy and media impacts.”

The joint statement aims to present the technical challenges and opportunities to prevent and mitigate potential harmful bias. The set of principles, consistent with the ACM Code of Ethics, is included in the statement and is intended to support the benefits of algorithmic decision-making while addressing these concerns.

The Principles for Algorithmic Transparency and Accountability from the joint statement are as follows:

  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.

We welcome your comments in the AI Matters blog and the ACM Technology Blog.

USACM

As your public policy officer, I have joined the USACM. My goals are to introduce AI matters into USACM discussions and to relay AI-related ideas and issues from USACM to SIGAI members through blog postings.
Here is some information about USACM:

Mission
The U.S. Public Policy Council of ACM (USACM) is chartered as the focal point for ACM’s interaction with U.S. government organizations, the computing community, and the U.S. public in all matters of U.S. public policy related to information technology and computing — except issues in science and math education relevant to computing and computer science, which is the responsibility of the Educational Policy Committee (EPC). The USACM Council superseded the former ACM U.S. Public Policy standing committee.

The USACM is authorized to take official policy positions. These positions reflect the position of the USACM and not necessarily that of ACM. Policy positions of USACM are decided by a majority vote of the USACM Executive Committee.

Committees
USACM currently has seven standing committees, listed below with their chairs:
USACM-Accessibility: Harry Hochheiser (accessibility & usability)
USACM-DigiGov: Chris Bronk (digital governance)
USACM-IP: Paul Hyland (intellectual property)
USACM-Law: Andy Grosso (IT & law)
USACM-Security: Alec Yasinsac (security)
USACM-Privacy: Brian Dean (privacy)
USACM-Voting: Barbara Simons (voting-related computing issues)

Working Groups
Internet of Things (USACM-IOT)
Algorithmic Accountability (USACM-Algorithms)
Big Data (USACM-Data)

Please find more information about USACM at http://usacm.acm.org/
and the brochure at
http://usacm.acm.org/images/documents/USACMBrochure.pdf

Policy Issues for AI Discussion

Today’s blog post seeks to focus on, and initiate a discussion about, the current administration’s positions on AI R&D support and public policies. We would like to know SIGAI members’ views on the important areas of concern for AI-related policies.

In December 2016, the Obama administration released a report on Artificial Intelligence, Automation, and the Economy. This report followed the administration’s previous report, Preparing for the Future of Artificial Intelligence, which recommended that the White House publish a report on the economic impacts of artificial intelligence by the end of 2016. The reports addressed the readiness of the United States for a future in which artificial intelligence plays a growing role. The Obama administration’s views are described in the Roadmap for AI Policy by Ajay Agrawal, Joshua Gans, and Avi Goldfarb in the December 21, 2016, Harvard Business Review. Some reference points from outside the US are Artificial intelligence: an overview for policy-makers from the U.K. and China’s planning for AI.

Miles Brundage and Joanna Bryson argued in August 2016 (see Smart Policies for Artificial Intelligence) that a de facto artificial intelligence policy already exists: “a patchwork of policies impacting the field of AI’s development in myriad ways. The key question related to AI policy, then, is not whether AI should be governed at all, but how it is currently being governed, and how that governance might become more informed, integrated, effective, and anticipatory.”

Some potential implications of AI for society involve the speed of change due to advances in AI technology; loss of individual control and privacy; job destruction due to automation; and the need for laws and public policy on AI technology’s role in the transformation of society. An important point is that AI’s impact is arriving much faster, and at a much larger scale of use, than past technological advances such as the industrial revolution. Organizations need to recognize that disruption of operations will happen whether or not change is intentional and planned.

In the current environment, we need to examine the extent of the new administration’s understanding of AI technology and of the need for policies, laws, and planning. So far, little information is available, from specifics about who will head the National Highway Traffic Safety Administration (NHTSA), the main federal agency regulating car safety, to the administration’s view of time scales. For example, the administration may take the position that AI will not cause job losses for many decades, a view that could distort assumptions about labor market trends and lead to policy mistakes. Such views on the future of AI could shape policies and programs meant to promote entrepreneurship and job creation. A few days ago an executive order established the American Technology Council, with an initial focus on information technology. The status of the White House Office of Science and Technology Policy is not available on the OSTP website. AI technology and applications will continue to grow rapidly, but whether public policy will keep pace is in doubt.

Please share your ideas via comments to this post and email messages to aimatters@sigai.acm.org.

Advocating for Science Beyond the March

Be a Force for Science: Advocating for Science Beyond the March
Wednesday, April 19, 2017 2:00 p.m. – 3:00 p.m. ET

Register here for the free AAAS webinar to learn about practical, concrete steps you can take to be a science advocate locally, nationally, and internationally. The panel of communications and advocacy experts will share best practices on outreach topics, including:
• How to communicate the importance of evidence-based decision-making to policymakers.
• How to work with the media.
• How to share the value of science and its impact with the public.

AAAS will also unveil an online advocacy toolkit.

Panelists:
Erika Shugart
Executive Director
American Society for Cell Biology

Francis Slakey
Interim Director of Public Affairs
American Physical Society

Suzanne Ffolkes
Vice President of Communications
Research!America

Moderator: Erin Heath
Associate Director, Office of Government Relations
AAAS