Algorithms and Algorithmic Transparency

Our July 15th post summarized the USACM-EUACM joint statement on Algorithmic Transparency and Accountability (ATA) and introduced the ATA FAQ project by the USACM Algorithms Working Group. The group's goal is “to take the lead addressing the technical aspects of algorithms and to have content prepared for media inquiries and policymakers.” SIGAI has been asked to contribute expertise in developing content for the FAQ. Please comment on this post so we can collect and share insights with USACM. You can also send your ideas and suggestions directly to Cynthia Florentino, ACM Policy Analyst, at cflorentino@acm.org.

This post focuses on the discussion of “algorithms” in the FAQ, and your feedback will be appreciated. Some of the input we have received so far follows:
“Q: What is an algorithm?
A: An algorithm is a set of well-defined steps that leads from inputs (data) to outputs (results). Today, algorithms are used in decision-making in education, access to credit, employment, and in the criminal justice system.  An algorithm can be compared to a recipe that runs in the same way each time, automatically using the given input data. The input data is combined and placed through the same set of steps, and the output is dependent on the input data and the set of steps that comprise the algorithm.”
and
“Q: Can algorithms be explained? Why or why not?  What are the challenges?
A: It is not always possible to interpret machine learning and algorithmic models. This is because a model may use an enormous volume of data in the process of figuring out the ideal approach. This in turn, makes it hard to go back and trace how the algorithm arrived at a certain decision.”
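
For technically inclined readers, a minimal sketch may make the “recipe” analogy concrete. The following Python example is purely illustrative (the decision rule and the threshold are hypothetical), but it shows how a hand-written algorithm applies the same fixed steps to its inputs every time:

```python
# A hand-written "recipe": the same well-defined steps run on every input.
# The rule and the 0.35 threshold are hypothetical, for illustration only.

def credit_decision(income: float, debt: float) -> str:
    """Map inputs (data) to an output (result) through fixed steps."""
    ratio = debt / income if income > 0 else float("inf")  # step 1: combine inputs
    if ratio < 0.35:                                       # step 2: apply a fixed rule
        return "approve"
    return "deny"                                          # step 3: emit the result

print(credit_decision(50_000, 12_000))  # always "approve" for these inputs
```

A trained machine learning model, by contrast, arrives at its decision rule by processing large volumes of data, which is exactly what makes tracing a particular decision back through the process difficult.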

This post raises an issue with the use of the term “algorithm” in the era of Big Data, in which “machine learning” has been folded into the fields of data analytics and data science. On ATA issues, the AI community needs to give careful attention to definitions and concepts that enable a clear discourse on ATA policy.

A case in point, and one on which we welcome input from SIGAI members, is the central role of artificial neural networks (NNs) in machine learning and deep learning. In what sense is a NN algorithmic? Toward the goal of algorithmic transparency, what needs to be explained about how a NN works? From a policy perspective, what are the challenges in addressing the transparency of a NN component of machine learning frameworks for audiences of varying technical backgrounds?

The mechanisms for training neural networks are algorithmic in the traditional sense of the word: a series of steps is applied repeatedly to adjust parameters, as in multilayer perceptron learning. These training algorithms operate the same way for all specific applications, mapping input data to output results. For policymakers and end users of systems involving machine learning, only a high-level discussion with simplified diagrams is practical for “explaining” these NN algorithms.
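
As a minimal sketch of this point, consider the classic perceptron learning rule. The toy example below (the data, learning rate, and epoch count are illustrative choices) shows the repeated predict-measure-adjust steps that make training algorithmic in the traditional sense:

```python
import random

random.seed(0)  # reproducible toy run

# Toy single-layer perceptron learning the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights, randomly initialized
b = random.uniform(-1, 1)                           # bias
lr = 0.1                                            # learning rate: a human design choice

for epoch in range(100):              # the same steps, repeated...
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0  # step 1: predict
        err = target - pred                               # step 2: measure the error
        w[0] += lr * err * x1                             # step 3: adjust parameters
        w[1] += lr * err * x2
        b += lr * err

# After training, the learned parameters reproduce OR: prints [0, 1, 1, 1]
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
```

The loop itself runs the same way for any input data; what the network ends up computing depends entirely on the data it is given.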

On the other hand, the design and implementation of applications involving NN-based machine learning are surely the real points of concern for issues of “algorithmic transparency.” In that regard, the “explanation” of a particular application could cover the careful description of the problem to be solved and the NN model chosen to solve it. Further, (for now) humans make the choices about the number and types of input items, the numbers of nodes and layers, the method for cleaning and normalizing input data, the choice of an appropriate error measure and number of training cycles, the procedure for independent testing, and the interpretation of results with realistic uncertainty estimates. The application development procedure is algorithmic in a general sense, but the more important point is that assumptions and biases enter into the design and implementation of the NN. The choice of data, and its relevance and quality, is centrally important to understanding the validity of a system involving machine learning. Thus, the transparency of NN algorithms, in the technical sense, might well be explained, but the transparency and biases of the model and the implementation process are the aspects with serious policy consequences.
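
The sketch below gathers some of the human choices listed above into one place, as they might appear in an application's configuration. Every name and value is hypothetical; the point is that each line is an assumption made before any training algorithm runs:

```python
# Hypothetical design choices for a NN-based decision application.
# None of these are produced by an algorithm; all are human decisions.
design_choices = {
    "input_features": ["age", "income", "zip_code"],   # which data to use, with its biases
    "hidden_layers": [64, 32],         # numbers of nodes and layers
    "normalization": "z-score",        # how input data is cleaned and scaled
    "error_measure": "cross_entropy",  # choice of an appropriate error measure
    "training_epochs": 50,             # number of training cycles
    "test_split": 0.2,                 # procedure for independent testing
}
```

Transparency about these choices, and about the provenance and quality of the data behind them, is arguably where the policy conversation should focus.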

We welcome your feedback!

USACM ATA FAQ

In the SIGAI June blog posts, we covered the USACM-EUACM joint statement on Algorithmic Transparency and Accountability (ATA). This topic is being actively discussed online and in public presentations. An interesting development is an FAQ project by the USACM Algorithms Working Group, which aims “to take the lead addressing the technical aspects of algorithms and to have content prepared for media inquiries and policymakers.” The FAQ could also help raise the profile of USACM’s work if stakeholders look to it for answers on the technical underpinnings of algorithms. The questions build on issues raised in the USACM-EUACM joint statement on ATA. The briefing materials will also support a forthcoming USACM policy event.

The FAQ is interesting in its own right, and an AI Matters blog discussion could be helpful to USACM and the ongoing evolution of the ATA issue. Please comment on this post so we can collect and share your input with USACM. You can also send your ideas and suggestions directly to Cynthia Florentino, ACM Policy Analyst, at cflorentino@acm.org.

Below are the questions being discussed. The USACM Working Group will appreciate input from SIGAI. I hope you enjoy thinking about these questions and the ideas around the issue of algorithmic transparency and accountability.

Current Questions in the DRAFT Working Document
Frequently Asked Questions
USACM Statement on Algorithmic Transparency and Accountability

Q: What is an algorithm?

Q: Can algorithms be explained? Why or why not? What are the challenges?

Q: What are the technical challenges associated with data inputs to an algorithm?

Q: What are machine learning models?

Q: What are neural networks?

Q: What are decision trees?

Q: How can we introduce checks and balances into the development and operation of software to make it impartial?

Q: When trying to introduce checks and balances, what is the impact of AI algorithms that are unable to export an explanation of their decision?

Q: What lies ahead for algorithms?

Q: Who is the intended audience?

Q: Are these principles just for the US, or are they intended to be applied worldwide?

Q: Are these principles for government or corporations to follow?

Q: Where did you get the idea for this project?

Q: What kind of decisions are being made by computers today?

Q: Can you give examples of biased decisions made by computers?

Q: Why is there resistance to explaining the decisions made by computers?

Q: Who is responsible for biased decisions made with input from a machine learning algorithm?

Q: What are sources of bias in algorithmic decision making?

Q: What are some examples of the data sets used to train machine learning algorithms that contain bias?

Q: Human decision makers can be biased as well. Are decisions made by computers more or less biased?

Q: Can algorithms be biased even if they do not look at protected characteristics like race, gender, disability status, etc.?

Q: What are some examples of proprietary algorithms being used to make decisions of public interest?

Q: Are there other sets of principles in this area?

Q: Are there other organizations working in this area?

Q: Are there any academic courses in this area?

*********

Your suggestions will be collected and sent to the USACM Algorithms Working Group, and you can also share your input directly with Cynthia Florentino, ACM Policy Analyst, at cflorentino@acm.org.

Winners of the ACM SIGAI Student Essay Contest on the Responsible Use of AI Technologies

All the submissions have been reviewed, and we are happy to announce the winners of the ACM SIGAI Student Essay Contest on the Responsible Use of AI Technologies. The winning essays argue, convincingly, why the proposed issues are pressing (that is, of current concern), why the issues concern AI technology, and what position or steps governments, industries or organizations (including ACM SIGAI) can take to address the issues or shape the discussion on them. These essays have been selected based on depth of insight, creativity, technical merit and novelty of argument.

The winners (in alphabetical order) are:

  • Jack Bandy, Automation Moderation: Finding symbiosis with anti-human technology
  • Joseph Blass, You, Me, or Us: Balancing Individuals’ and Societies’ Moral Needs and Desires in Autonomous Systems
  • Lukas Prediger, On Monitoring and Directing Progress in AI
  • Matthew Rahtz, Truth in the ‘Killer Robots’ Angle
  • Grace Su, Unemployment in the AI Age
  • Ilse Verdiesen, How do we ensure that we remain in control of our Autonomous Weapons?
  • Christian Wagner, Sexbots: The Ethical Ramifications of Social Robotics’ Dark Side
  • Dennis Wilson, The Ethics of Big Data and Psychographics

All winning essays will be published in the ACM SIGAI newsletter “AI Matters.” ACM SIGAI provides five monetary awards of USD 500 each as well as 45-minute Skype sessions with the following AI researchers:

  • Murray Campbell, Senior Manager, IBM Thomas J. Watson Research Center
  • Eric Horvitz, Managing Director, Microsoft Research
  • Peter Norvig, Director of Research, Google
  • Stuart Russell, Professor, University of California at Berkeley
  • Michael Wooldridge, Head of the Department of Computer Science, University of Oxford

Special thanks are in order to our panel of expert reviewers. Each essay was read and scored by three or more of the following AI experts:

  • Sanmay Das, Washington University in St. Louis
  • Judy Goldsmith, University of Kentucky
  • H. V. Jagadish, University of Michigan
  • Albert Jiang, Trinity University
  • Sven Koenig, University of Southern California
  • Benjamin Kuipers, University of Michigan
  • Nicholas Mattei, IBM Research
  • Alexandra Olteanu, IBM Research
  • Rosemary Paradis, Lockheed Martin
  • Francesca Rossi, IBM Research

We hope to run this contest again with a new topic in the future!

— Nicholas Mattei, IBM Research

China Matters

In a recent post, AI Matters welcomed ACM SIGAI China and its members as a chapter of ACM SIGAI.  Prof. Le Dong, University of Electronic Science and Technology of China, is the Chair of SIGAI China. The AI Matters policy blog will explore areas of common interest in AI policy and issues for discussion in future postings.

As its first event, ACM SIGAI China held the Symposium on New Challenges and Opportunities in the Post-Turing AI Era in May 2017, as part of the ACM Turing 50th Celebration Conference in Shanghai. Keynote presentations addressed the challenges of bringing robotic and other AI technologies into practice, including a keynote by our own Prof. Sven Koenig on timely decision making by robots and other agents in their environments.

The Symposium included workshops of particular relevance to policy issues. “The Career of the Young in the Emerging Field” featured rising new scientists discussing the human responsibilities and challenges that accompany the many career opportunities in AI. “The Gold-Rush Again to Western China: When ACM Meets B&R” focused on the Belt and Road Initiative, a trans-Eurasian, cross-ocean economic strategy, and the related opportunities for computer science. “IoT and Cyberspace Security” explored opportunities and issues in vehicular sensor networks, traffic management, intelligent and green transportation, and the collection of data on people and things for operating urban infrastructure.

We look forward to interactions with our colleagues in ACM SIGAI China as we explore policy issues and discuss cutting-edge research in artificial intelligence.

News from ACM SIGAI

We welcome ACM SIGAI China and its members to ACM SIGAI! ACM SIGAI China held its first event, the ACM SIGAI China Symposium on New Challenges and Opportunities in the Post-Turing AI Era, as part of the ACM Turing 50th Celebration Conference on May 12-14, 2017 in Shanghai. We will report details in an upcoming edition of AI Matters.

The winner of the ACM Prize in Computing is Alexei Efros from the University of California at Berkeley for his work on machine learning in computer vision and computer graphics. The award will be presented at the annual ACM Awards Banquet on June 24, 2017 in San Francisco.

We hope that you enjoyed the ACM Learning Webinar with Tom Mitchell on June 15, 2017, “Using Machine Learning to Study Neural Representations of Language Meaning”. If you missed it, it is now available on demand.

The “50 Years of the ACM Turing Award” Celebration will be held on June 23 and 24, 2017 in San Francisco. The ACM SIGAI recipients of the ACM Turing Scholarship to attend this high-profile meeting are Tim Lee from Carnegie Mellon University and Justin Svegliato from the University of Massachusetts at Amherst.

ACM SIGAI now has a 3-month membership requirement before students who join ACM SIGAI can apply for financial benefits from ACM SIGAI, such as fellowships and travel support. Please help us let all students know about this new requirement to avoid any disappointments.

Algorithmic Accountability

The previous SIGAI public policy post covered the USACM-EUACM joint statement on Algorithmic Transparency and Accountability. Several interesting developments offer opportunities for SIGAI members to discuss related topics. In particular, individuals and groups are calling for measures to provide independent oversight that might mitigate the dangers of biased, faulty, and malicious algorithms. Transparency is important for data systems and algorithms that guide life-critical systems such as healthcare, air traffic control, and nuclear control rooms. Ben Shneiderman’s Turing lecture is highly recommended on this point: https://www.youtube.com/watch?v=UWuDgY8aHmU

A robust discussion on the SIGAI Public Policy blog would be a great way to explore ideas on oversight measures. Additionally, we should weigh in on some fundamental questions such as those raised by Ed Felten in his recent article “What does it mean to ask for an ‘explainable’ algorithm?” He sets up an excellent framework for the discussion, and the comments on his article raise differing points of view we should consider.

Felten says that “one of the standard critiques of using algorithms for decision-making about people, and especially for consequential decisions about access to housing, credit, education, and so on, is that the algorithms don’t provide an ‘explanation’ for their results or the results aren’t ‘interpretable.’  This is a serious issue, but discussions of it are often frustrating. The reason, I think, is that different people mean different things when they ask for an explanation of an algorithm’s results”.  Felten discusses four types of explainability:
1.  A claim of confidentiality (institutional/legal). Someone withholds relevant information about how a decision is made.
2.  Complexity (barrier to big picture understanding). Details about the algorithm are difficult to explain, but the impact of the results on a person can still be understood.
3.  Unreasonableness (results don’t make sense). The workings of the algorithm are clear, and are justified by statistical evidence, but the nature of how our world functions isn’t clear.
4.  Injustice (justification for designing the algorithm). Using the algorithm is unfair, unjust, or morally wrong.

In addition, SIGAI should provide input on the nature of AI systems and what it means to “explain” how decision-making AI technologies work – for example, the role of algorithms in supervised and unsupervised systems versus the choices of data and design options in creating an operational system.

Your comments are welcome. Also, please share what work you may be doing in the area of algorithmic transparency.

Algorithmic Transparency and Accountability

Algorithms in AI and data science software are having increasing impacts on individuals and society. Along with the many benefits of intelligent systems, potential harmful bias needs to be addressed. A USACM-EUACM joint statement was released on May 25, 2017, and can be found at http://www.acm.org/binaries/content/assets/publicpolicy/2017_joint_statement_algorithms.pdf. See the ACM Technology Blog for discussion of the statement. The ACM US Public Policy Council approved the principles earlier this year.

In a message to USACM members, ACM Director of Public Policy Renee Dopplick said, “EUACM has endorsed the Statement on Algorithmic Transparency and Accountability. Furthering its impacts, we are re-releasing it as a joint statement with a related media release. The USACM-EUACM Joint Statement demonstrates and affirms shared support for these principles to help minimize the potential for harm in algorithmic decision making and thus strengthens our ability to further expand our policy and media impacts.”

The joint statement aims to present the technical challenges and opportunities to prevent and mitigate potential harmful bias. The set of principles, consistent with the ACM Code of Ethics, is included in the statement and is intended to support the benefits of algorithmic decision-making while addressing these concerns.

The Principles for Algorithmic Transparency and Accountability from the joint statement are as follows:

  1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
  2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
  3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
  4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
  5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
  6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
  7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.

We welcome your comments in the AI Matters blog and the ACM Technology Blog.

USACM

As your public policy officer, I have joined the USACM.  My goals are to introduce AI matters into USACM discussions and to relay AI-related ideas and issues from USACM to SIGAI members through blog postings.
Here is some information about USACM:

Mission
The U.S. Public Policy Council of ACM (USACM) is chartered as the focal point for ACM’s interaction with U.S. government organizations, the computing community, and the U.S. public in all matters of U.S. public policy related to information technology and computing — except issues in science and math education relevant to computing and computer science, which is the responsibility of the Educational Policy Committee (EPC). The USACM Council superseded the former ACM U.S. Public Policy standing committee.

The USACM is authorized to take official policy positions.  These positions reflect the position of the USACM and not necessarily that of ACM. Policy positions of USACM are decided by a majority vote of the USACM Executive Committee.

Committees
Currently, USACM has the following seven standing committees (with chairs):

  • USACM-Accessibility: Harry Hochheiser (Accessibility & usability)
  • USACM-DigiGov: Chris Bronk (Digital governance)
  • USACM-IP: Paul Hyland (Intellectual property)
  • USACM-Law: Andy Grosso (IT & Law)
  • USACM-Security: Alec Yasinsac (Security)
  • USACM-Privacy: Brian Dean (Privacy)
  • USACM-Voting: Barbara Simons (Voting-related computing issues)

Working Groups

  • Internet of Things (USACM-IOT)
  • Algorithmic Accountability (USACM-Algorithms)
  • Big Data (USACM-Data)

Please find more information about USACM at http://usacm.acm.org/ and in the USACM brochure at http://usacm.acm.org/images/documents/USACMBrochure.pdf.

AI Matters Interview with Peter Stone

Welcome!  This column is the third in our series profiling senior AI researchers. This month focuses on Peter Stone, a Professor at the University of Texas at Austin and the COO and co-founder of Cogitai, Inc.

Peter Stone’s Bio

Dr. Peter Stone is the David Bruton, Jr. Centennial Professor and Associate Chair of Computer Science, as well as Chair of the Robotics Portfolio Program, at the University of Texas at Austin. In 2013 he was awarded the University of Texas System Regents’ Outstanding Teaching Award and in 2014 he was inducted into the UT Austin Academy of Distinguished Teachers, earning him the title of University Distinguished Teaching Professor. Professor Stone’s research interests in Artificial Intelligence include machine learning (especially reinforcement learning), multiagent systems, robotics, and e-commerce. Professor Stone received his Ph.D. in Computer Science in 1998 from Carnegie Mellon University. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs – Research. He is an Alfred P. Sloan Research Fellow, Guggenheim Fellow, AAAI Fellow, Fulbright Scholar, and 2004 ONR Young Investigator. In 2003, he won an NSF CAREER award for his proposed long-term research on learning agents in dynamic, collaborative, and adversarial multiagent environments; in 2007 he received the prestigious IJCAI Computers and Thought Award, given biannually to the top AI researcher under the age of 35; and in 2016 he was awarded the ACM/SIGAI Autonomous Agents Research Award.

How did you become interested in AI?

The first time I remember becoming interested in AI was on a field trip to the University of Buffalo when I was in middle school or early high school (I don’t remember which).  The students rotated through a number of science labs, and one of the ones I ended up in was a computer science “lab.”  The thing that stands out in my mind is the professor showing us pictures of various shapes such as triangles and squares, pointing out how easy it was for us to distinguish them, but then asserting that nobody knew how to write a computer program to do so (to date myself, this must have been the mid ’80s).  I had already started programming computers, but this got me interested in the concept of modeling intelligence with computers.

What made you decide the time was right for an AI startup?

Reinforcement learning has been a relatively “niche” area of AI since I became interested in it my first year of graduate school.  But with recent advances, I became convinced that now was the time to move to the next level and work on problems that are only possible to attack in a commercial setting.

How did I become convinced?  For that, I owe the credit to Mark Ring, one of my co-founders at Cogitai.  He and I met at the first NIPS conference I attended back in the mid ’90s.  We’ve stayed in touch intermittently.  But then in the fall of 2014 he visited Austin and got in touch.  He pitched the idea to me of starting a company based on continual learning, and it just made sense.

What professional achievement are you most proud of?

I’m made proud over and over again by the achievements of my students and postdocs.  I’ve been very fortunate to work with a phenomenal group of individuals, both technically and personally.  Nothing makes me happier than seeing each succeed in his or her own way, and to think that I played some small role in it.

What do you wish you had known as a Ph.D. student or early researcher?

It’s cliché, but it’s true.  There’s no better time of life than when you’re a Ph.D. student.  You have the freedom to pursue one idea that you’re passionate about to the greatest extent possible, with very few other responsibilities.  You don’t have the status, appreciation, or salary that you deserve and that you’ll eventually, inevitably get.  And yes, there are pressures.  But your job is to learn and to change the world in some small way.  I didn’t appreciate it when I was a student even though my advisor (Manuela Veloso) told me.  And I don’t expect my students to believe me when I tell them now.  But over time I hope they come to appreciate it as I have.  I loved my time as a Ph.D. student. But if I had known how many aspects of that time of life would be fleeting, I might have appreciated it even more.

What would you have chosen as your career if you hadn’t gone into AI?

I have no idea.  When I graduated from the University of Chicago as an undergrad, I applied to 4 CS Ph.D. programs, the Peace Corps, and Teach for America.  CMU was the only Ph.D. program that admitted me.  So I probably would have done the Peace Corps or Teach for America.  Who knows where that would have led me?

What is a “typical” day like for you?

I live a very full life.  Every day I spend as much time with my family as they’ll let me (teenagers….) and get some sort of exercise (usually soccer, swimming, running, or biking).  I also play my violin about 3-4 times per week.  I schedule those things, and other aspects of my social life, and then work in all my “free” time.  That usually means catching up on email in the morning, attending meetings with students and colleagues either in person or by Skype, reading articles, and editing students’ papers.  And I work late at night and on weekends when there’s no “fun” scheduled.  But really, there’s no “typical” day.  Some days I’m consumed with reading; others with proposal writing; others with negotiations with prospective employees; others with university politics; others with event organization; others with coming up with new ideas for burning problems.

I do a lot of multitasking, and I’m no better at it than anyone else. But I’m never bored.

How do you balance being involved in so many different aspects of the AI community?

I don’t know.  I have many interests and I can’t help but pursue them all.  And I multitask.

What is your favorite CS or AI-related movie or book and why?

Rather than a book, I’ll choose an author.  As a teenager, I read Isaac Asimov’s books voraciously – both his fiction (of course “I, Robot” made an impression, but the Foundation series was always my favorite) and his non-fiction.  He influenced my thoughts and imagination greatly.

Policy Issues for AI Discussion

Today’s blog post seeks to focus on, and initiate a discussion about, the current administration’s positions on AI R&D support and public policies. We would like to know SIGAI members’ views on the important areas of concern for AI-related policies.

In December 2016, the Obama administration released a report on Artificial Intelligence, Automation, and the Economy. This report followed the Administration’s previous report, Preparing for the Future of Artificial Intelligence, which recommended that the White House publish a report on the economic impacts of artificial intelligence by the end of 2016. The reports addressed readiness of the United States for a future in which artificial intelligence plays a growing role. The Obama Administration’s views are described in the Roadmap for AI Policy by Ajay Agrawal, Joshua Gans, and Avi Goldfarb in the December 21, 2016, Harvard Business Review. Some reference points from outside the US are Artificial intelligence: an overview for policy-makers from the U.K. and China’s planning for AI.

Miles Brundage and Joanna Bryson argued in August 2016 (see Smart Policies for Artificial Intelligence) that a de facto artificial intelligence policy already exists: “a patchwork of policies impacting the field of AI’s development in myriad ways. The key question related to AI policy, then, is not whether AI should be governed at all, but how it is currently being governed, and how that governance might become more informed, integrated, effective, and anticipatory.”

Some potential implications of AI for society involve the speed of change due to advances in AI technology; loss of individual control and privacy; job destruction due to automation; and the need for laws and public policy on AI technology’s role in the transformation of society. An important point is that, compared to the industrial revolution, AI’s impact is happening much faster and at a much larger scale of use than past technological advances. Organizations need to recognize the likelihood of disruption of operations that will happen whether or not change is intentional and planned.

In our current environment, we need to examine the extent of the new administration’s understanding of AI technology and the need for policies, laws, and planning. So far, not much information is available — from specifics about who will head the National Highway Traffic Safety Administration (NHTSA), the main federal agency that regulates car safety, to the administration’s view of time scales. For example, the administration may take the position that AI will not cause job losses for many decades, a view that could distort assumptions about labor market trends and lead to policy mistakes. Such views on the future of AI could affect policies and programs that promote entrepreneurship and job creation. A few days ago, an executive order established the American Technology Council with an initial focus on information technology. The status of the White House Office of Science and Technology Policy is not available on the OSTP website. AI technology and applications will continue to grow rapidly, but whether public policy will keep pace is in doubt.

Please share your ideas via comments to this post and email messages to aimatters@sigai.acm.org.