News from ACM SIGAI

We welcome ACM SIGAI China and its members to ACM SIGAI! ACM SIGAI China held its first event, the ACM SIGAI China Symposium on New Challenges and Opportunities in the Post-Turing AI Era, as part of the ACM Turing 50th Celebration Conference on May 12-14, 2017 in Shanghai. We will report details in an upcoming edition of AI Matters.

The winner of the ACM Prize in Computing is Alexei Efros from the University of California at Berkeley for his work on machine learning in computer vision and computer graphics. The award will be presented at the annual ACM Awards Banquet on June 24, 2017 in San Francisco.

We hope that you enjoyed the ACM Learning Webinar with Tom Mitchell on June 15, 2017, on “Using Machine Learning to Study Neural Representations of Language Meaning”. If you missed it, it is now available on demand.

The “50 Years of the ACM Turing Award” Celebration will be held on June 23 and 24, 2017 in San Francisco. The ACM SIGAI recipients of the ACM Turing Scholarship to attend this high-profile meeting are Tim Lee from Carnegie Mellon University and Justin Svegliato from the University of Massachusetts at Amherst.

ACM SIGAI now requires students to have been members for 3 months before they can apply for financial benefits from ACM SIGAI, such as fellowships and travel support. Please help us let all students know about this new requirement to avoid any disappointment.

Algorithmic Accountability

The previous SIGAI public policy post covered the USACM-EUACM joint statement on Algorithmic Transparency and Accountability. Several recent developments provide opportunities for SIGAI members to discuss related topics. In particular, individuals and groups are calling for measures to provide independent oversight that might mitigate the dangers of biased, faulty, and malicious algorithms. Transparency is important for data systems and algorithms that guide life-critical systems such as healthcare, air traffic control, and nuclear control rooms. On this point, Ben Shneiderman’s Turing lecture is highly recommended: https://www.youtube.com/watch?v=UWuDgY8aHmU

A robust discussion on the SIGAI Public Policy blog would be a great way to explore ideas on oversight measures. We should also weigh in on fundamental questions such as those raised by Ed Felten in his recent article “What does it mean to ask for an ‘explainable’ algorithm?” He sets up an excellent framework for the discussion, and the comments on his article raise differing points of view that we should consider.

Felten says that “one of the standard critiques of using algorithms for decision-making about people, and especially for consequential decisions about access to housing, credit, education, and so on, is that the algorithms don’t provide an ‘explanation’ for their results or the results aren’t ‘interpretable.’ This is a serious issue, but discussions of it are often frustrating. The reason, I think, is that different people mean different things when they ask for an explanation of an algorithm’s results”. Felten discusses four types of explainability:
1.  A claim of confidentiality (institutional/legal). Someone withholds relevant information about how a decision is made.
2.  Complexity (barrier to big picture understanding). Details about the algorithm are difficult to explain, but the impact of the results on a person can still be understood.
3.  Unreasonableness (results don’t make sense). The workings of the algorithm are clear and are justified by statistical evidence, but the results conflict with our sense of how the world works.
4.  Injustice (challenge to the justification for using the algorithm). Using the algorithm at all is unfair, unjust, or morally wrong.
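
Felten’s second category suggests one practical point worth illustrating: even when an algorithm is too complex to explain in full, it is often still possible to describe how it affected a particular person. The sketch below is purely illustrative and not drawn from Felten’s article; the scoring function, feature names, and applicant record are hypothetical, and the one-feature-at-a-time probe is just one simple way such a per-decision explanation might be produced.

```python
# Hypothetical example: a per-decision explanation for a "black box" score.
# The scoring function and applicant data are invented for illustration only.

def credit_score(applicant):
    """A stand-in for a complex, opaque decision model (hypothetical)."""
    return (0.4 * applicant["income"] / 1000
            - 0.6 * applicant["debt"] / 1000
            + 0.2 * applicant["years_employed"])

def explain_decision(score_fn, applicant, bump=1.0):
    """Probe how each input affects this applicant's score by perturbing
    one feature at a time (a simple sensitivity check, not a full explanation)."""
    base = score_fn(applicant)
    contributions = {}
    for feature, value in applicant.items():
        perturbed = dict(applicant, **{feature: value + bump})
        contributions[feature] = score_fn(perturbed) - base
    return base, contributions

applicant = {"income": 42000, "debt": 18000, "years_employed": 3}
base, contribs = explain_decision(credit_score, applicant)
print(f"score = {base:.2f}")
for feature, delta in sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  +1 unit of {feature!r} changes the score by {delta:+.4f}")
```

A real system would need a more careful explanation method, but the point stands: the impact of a decision on an individual can be reported without exposing, or even fully understanding, the model’s internals.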

In addition, SIGAI should provide input on the nature of AI systems and what it means to “explain” how decision-making AI technologies work – for example, the role of algorithms in supervised and unsupervised systems versus the choices of data and design options in creating an operational system.

Your comments are welcome. Also, please share what work you may be doing in the area of algorithmic transparency.

Algorithmic Transparency and Accountability

Algorithms in AI and data science software are having increasing impacts on individuals and society. Intelligent systems bring many benefits, but potentially harmful bias also needs to be addressed. A USACM-EUACM joint statement was released on May 25, 2017, and can be found at http://www.acm.org/binaries/content/assets/publicpolicy/2017_joint_statement_algorithms.pdf. See the ACM Technology Blog for discussion of the statement. The ACM US Public Policy Council approved the principles earlier this year.

In a message to USACM members, ACM Director of Public Policy Renee Dopplick said, “EUACM has endorsed the Statement on Algorithmic Transparency and Accountability. Furthering its impacts, we are re-releasing it as a joint statement with a related media release. The USACM-EUACM Joint Statement demonstrates and affirms shared support for these principles to help minimize the potential for harm in algorithmic decision making and thus strengthens our ability to further expand our policy and media impacts.”

The joint statement aims to present the technical challenges and opportunities to prevent and mitigate potential harmful bias. The set of principles, consistent with the ACM Code of Ethics, is included in the statement and is intended to support the benefits of algorithmic decision-making while addressing these concerns.

The Principles for Algorithmic Transparency and Accountability from the joint statement are as follows:

  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.
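
To make the Validation and Testing principle a bit more concrete, here is a minimal sketch of the kind of routine check an institution might run over its decision logs. It is an illustration only: the decision records are invented, and comparing group selection rates against the commonly cited 80% (four-fifths) threshold is one possible test among many, not a requirement of the joint statement.

```python
# Minimal, hypothetical audit of decision outcomes by group.
# The records and the 0.8 threshold are illustrative assumptions only.
from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="approved"):
    """Compute the fraction of favorable outcomes per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favorable[r[group_key]] += int(r[outcome_key])
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log.
records = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

rates = selection_rates(records)
ratio = disparate_impact_ratio(rates)
print("selection rates:", rates)
print(f"disparate impact ratio: {ratio:.2f}"
      + ("  (below the commonly cited 0.8 threshold)" if ratio < 0.8 else ""))
```

Which metric and threshold are appropriate depends on the application and the applicable law; the principle’s point is that such tests be performed routinely, documented, and, where possible, made public.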

We welcome your comments in the AI Matters blog and the ACM Technology Blog.