The previous SIGAI public policy post covered the USACM-EUACM joint statement on Algorithmic Transparency and Accountability. Several recent developments offer opportunities for SIGAI members to discuss related topics. In particular, individuals and groups are calling for independent oversight measures that might mitigate the dangers of biased, faulty, and malicious algorithms. Transparency is important for data systems and algorithms that guide life-critical systems such as healthcare, air traffic control, and nuclear control rooms. Ben Shneiderman’s Turing lecture is highly recommended on this point: https://www.youtube.com/watch?v=UWuDgY8aHmU
A robust discussion on the SIGAI Public Policy blog would be great for exploring ideas on oversight measures. Additionally, we should weigh in on some fundamental questions such as those raised by Ed Felten in his recent article “What does it mean to ask for an ‘explainable’ algorithm?” He sets up an excellent framework for the discussion, and the comments on his article raise differing points of view we should consider.
Felten says that “one of the standard critiques of using algorithms for decision-making about people, and especially for consequential decisions about access to housing, credit, education, and so on, is that the algorithms don’t provide an ‘explanation’ for their results or the results aren’t ‘interpretable.’ This is a serious issue, but discussions of it are often frustrating. The reason, I think, is that different people mean different things when they ask for an explanation of an algorithm’s results”. Felten discusses four types of explainability:
1. A claim of confidentiality (institutional/legal): someone withholds relevant information about how a decision is made.
2. Complexity (a barrier to big-picture understanding): the details of the algorithm are difficult to explain, but the impact of its results on a person can still be understood.
3. Unreasonableness (the results don’t make sense): the workings of the algorithm are clear and justified by statistical evidence, but the results seem at odds with how we understand the world to work.
4. Injustice (the justification for designing the algorithm): using the algorithm is unfair, unjust, or morally wrong.
In addition, SIGAI should provide input on the nature of AI systems and what it means to “explain” how decision-making AI technologies work – for example, the role of algorithms in supervised and unsupervised systems versus the choices of data and design options in creating an operational system.
Your comments are welcome. Also, please share what work you may be doing in the area of algorithmic transparency.
In a message to USACM members, ACM Director of Public Policy Renee Dopplick said, “EUACM has endorsed the Statement on Algorithmic Transparency and Accountability. Furthering its impacts, we are re-releasing it as a joint statement with a related media release. The USACM-EUACM Joint Statement demonstrates and affirms shared support for these principles to help minimize the potential for harm in algorithmic decision making and thus strengthens our ability to further expand our policy and media impacts.”
The joint statement aims to present the technical challenges and opportunities to prevent and mitigate potential harmful bias. The set of principles, consistent with the ACM Code of Ethics, is included in the statement and is intended to support the benefits of algorithmic decision-making while addressing these concerns.
The Principles for Algorithmic Transparency and Accountability from the joint statement are as follows:
Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.
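As a concrete illustration of the Validation and Testing principle, the sketch below shows one simple check an institution might run routinely: the gap in positive-decision rates across groups (demographic parity). The function name, data format, and example data are illustrative assumptions for this post, not part of the joint statement, and real audits would use richer metrics and statistical tests.

```python
# Hypothetical sketch: measure demographic parity, i.e. the largest
# difference in positive-decision rates between any two groups.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}.
    Returns the largest difference in approval rates between groups."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Illustrative example: loan decisions tagged with an applicant group.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(sample)  # group A approves 2/3, group B 1/3
```

A gap near zero does not by itself establish fairness, but publishing the results of such routine tests, as the principle encourages, gives outsiders a basis for scrutiny.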
As your public policy officer, I have joined the USACM. My goals are to introduce AI matters into USACM discussions and to relay AI-related ideas and issues from USACM to SIGAI members through blog postings.
Here is some information about USACM:
Mission

The U.S. Public Policy Council of ACM (USACM) is chartered as the focal point for ACM’s interaction with U.S. government organizations, the computing community, and the U.S. public in all matters of U.S. public policy related to information technology and computing — except issues in science and math education relevant to computing and computer science, which are the responsibility of the Educational Policy Committee (EPC). The USACM Council superseded the former ACM U.S. Public Policy standing committee.
The USACM is authorized to take official policy positions. These positions reflect the position of the USACM and not necessarily that of ACM. Policy positions of USACM are decided by a majority vote of the USACM Executive Committee.
Committees

USACM currently has seven standing committees (with chairs):
USACM-Accessibility Harry Hochheiser (Accessibility & usability)
USACM-DigiGov Chris Bronk (Digital governance)
USACM-IP Paul Hyland (Intellectual property)
USACM-Law Andy Grosso (IT & Law)
USACM-Security Alec Yasinsac (Security)
USACM-Privacy Brian Dean (Privacy)
USACM-Voting Barbara Simons (Voting-related computing issues)
Working Groups

Internet of Things (USACM-IOT)
Algorithmic Accountability (USACM-Algorithms)
Big Data (USACM-Data)
Welcome! This column is the third in our series profiling senior AI researchers. This month focuses on Peter Stone, a Professor at the University of Texas at Austin and the COO and co-founder of Cogitai, Inc.
Peter Stone’s Bio
Dr. Peter Stone is the David Bruton, Jr. Centennial Professor and Associate Chair of Computer Science, as well as Chair of the Robotics Portfolio Program, at the University of Texas at Austin. In 2013 he was awarded the University of Texas System Regents’ Outstanding Teaching Award, and in 2014 he was inducted into the UT Austin Academy of Distinguished Teachers, earning him the title of University Distinguished Teaching Professor. Professor Stone’s research interests in Artificial Intelligence include machine learning (especially reinforcement learning), multiagent systems, robotics, and e-commerce. Professor Stone received his Ph.D. in Computer Science in 1998 from Carnegie Mellon University. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs – Research. He is an Alfred P. Sloan Research Fellow, Guggenheim Fellow, AAAI Fellow, Fulbright Scholar, and 2004 ONR Young Investigator. In 2003 he won an NSF CAREER award for his proposed long-term research on learning agents in dynamic, collaborative, and adversarial multiagent environments; in 2007 he received the prestigious IJCAI Computers and Thought Award, given biannually to the top AI researcher under the age of 35; and in 2016 he was awarded the ACM/SIGAI Autonomous Agents Research Award.
How did you become interested in AI?
The first time I remember becoming interested in AI was on a field trip to the University of Buffalo when I was in middle school or early high school (I don’t remember which). The students rotated through a number of science labs, and one of the ones I ended up in was a computer science “lab.” The thing that stands out in my mind is the professor showing us pictures of various shapes such as triangles and squares, pointing out how easy it was for us to distinguish them, but then asserting that nobody knew how to write a computer program to do so (to date myself, this must have been the mid ’80s). I had already started programming computers, but this got me interested in the concept of modeling intelligence with computers.
What made you decide the time was right for an AI startup?
Reinforcement learning has been a relatively “niche” area of AI since I became interested in it my first year of graduate school. But with recent advances, I became convinced that now was the time to move to the next level and work on problems that are only possible to attack in a commercial setting.
How did I become convinced? For that, I owe the credit to Mark Ring, one of my co-founders at Cogitai. He and I met at the first NIPS conference I attended back in the mid ’90s. We’ve stayed in touch intermittently. But then in the fall of 2014 he visited Austin and got in touch. He pitched the idea to me of starting a company based on continual learning, and it just made sense.
What professional achievement are you most proud of?
I’m made proud over and over again by the achievements of my students and postdocs. I’ve been very fortunate to work with a phenomenal group of individuals, both technically and personally. Nothing makes me happier than seeing each succeed in his or her own way, and to think that I played some small role in it.
What do you wish you had known as a Ph.D. student or early researcher?
It’s a cliché, but it’s true: there’s no better time of life than when you’re a Ph.D. student. You have the freedom to pursue one idea that you’re passionate about to the greatest extent possible, with very few other responsibilities. You don’t have the status, appreciation, or salary that you deserve and that you’ll eventually inevitably get. And yes, there are pressures. But your job is to learn and to change the world in some small way. I didn’t appreciate it when I was a student, even though my advisor (Manuela Veloso) told me. And I don’t expect my students to believe me when I tell them now. But over time I hope they come to appreciate it as I have. I loved my time as a Ph.D. student. But if I had known how many aspects of that time of life would be fleeting, I might have appreciated it even more.
What would you have chosen as your career if you hadn’t gone into AI?
I have no idea. When I graduated from the University of Chicago as an undergrad, I applied to 4 CS Ph.D. programs, the Peace Corps, and Teach for America. CMU was the only Ph.D. program that admitted me. So I probably would have done the Peace Corps or Teach for America. Who knows where that would have led me?
What is a “typical” day like for you?
I live a very full life. Every day I spend as much time with my family as they’ll let me (teenagers….) and get some sort of exercise (usually soccer, swimming, running, or biking). I also play my violin about 3-4 times per week. I schedule those things, and other aspects of my social life, and then work in all my “free” time. That usually means catching up on email in the morning, attending meetings with students and colleagues either in person or by Skype, reading articles, and editing students’ papers. And I work late at night and on weekends when there’s no “fun” scheduled. But really, there’s no “typical” day. Some days I’m consumed with reading; others with proposal writing; others with negotiations with prospective employees; others with university politics; others with event organization; others with coming up with new ideas for burning problems.
I do a lot of multitasking, and I’m no better at it than anyone else. But I’m never bored.
How do you balance being involved in so many different aspects of the AI community?
I don’t know. I have many interests and I can’t help but pursue them all. And I multitask.
What is your favorite CS or AI-related movie or book and why?
Rather than a book, I’ll choose an author. As a teenager, I read Isaac Asimov’s books voraciously – both his fiction (of course “I, Robot” made an impression, but the Foundation series was always my favorite) and his non-fiction. He influenced my thoughts and imagination greatly.
Today’s blog post seeks to focus on, and initiate a discussion about, the current administration’s positions on AI R&D support and public policies. We would like to know SIGAI members’ views on the important areas of concern for AI-related policies.
Miles Brundage and Joanna Bryson argued in August 2016 (see Smart Policies for Artificial Intelligence) that a de facto artificial intelligence policy already exists: “a patchwork of policies impacting the field of AI’s development in myriad ways. The key question related to AI policy, then, is not whether AI should be governed at all, but how it is currently being governed, and how that governance might become more informed, integrated, effective, and anticipatory.”
Some potential implications of AI for society involve the speed of change due to advances in AI technology; loss of individual control and privacy; job destruction due to automation; and the need for laws and public policy on AI technology’s role in the transformation of society. An important point is that AI’s impact is arriving much faster and at a much larger scale than past technological advances such as the industrial revolution. Organizations need to recognize the likelihood of disruption of operations, which will happen whether or not change is intentional and planned.
In our current environment, we need to examine the extent of the new administration’s understanding of AI technology and the need for policies, laws, and planning. So far, not much information is available — from specifics about who will head the National Highway Traffic Safety Administration (NHTSA), the main federal agency that regulates car safety, to the administration’s view of time scales. For example, the administration may take the position that AI will not cause job losses for many decades, a view that could distort assumptions about labor market trends and lead to policy mistakes. Such views on the future of AI could affect programs that promote entrepreneurship and job creation. A few days ago an executive order established the American Technology Council with an initial focus on information technology. The status of the White House Office of Science and Technology Policy is not available on the OSTP website. AI technology and applications will continue to grow rapidly, but whether public policy will keep pace is in doubt.
Be a Force for Science: Advocating for Science Beyond the March
Wednesday, April 19, 2017 2:00 p.m. – 3:00 p.m. ET
Register Here for the free AAAS webinar to learn about practical, concrete steps you can take to be a science advocate locally, nationally and internationally. The panel of communications and advocacy experts will share best practices on outreach topics, including:
• How to communicate the importance of evidence-based decision making to policymakers.
• How to work with the media.
• How to share the value of science and its impact with the public.
AAAS will also unveil an online advocacy toolkit.
Panelists:
• Interim Director of Public Affairs, American Society for Cell Biology
• Vice President of Communications, American Physical Society
Moderator: Erin Heath, Associate Director, Office of Government Relations
In this post, I report on my attendance at the excellent annual AAAS Forum on Science & Technology Policy held on March 27th in Washington, DC.
Presentations included talks on federal agency priorities by NIH Director Francis Collins and NSF Director France Córdova. While nearly everyone at the Forum was worried about the new administration’s funding for R&D, several exciting initiatives were discussed, such as NSF’s ideas for “Harnessing Data for 21st Century Science and Engineering” and “Shaping the Human-Technology Frontier,” of particular interest to SIGAI (see a detailed description). Likewise, NIH is embarking on its “All of Us” research program, aimed at extending precision medicine to all diseases.
Back to the concern about government support for science & technology funding, Matt Hourihan, who runs the R&D Budget and Policy Program at AAAS, gave preliminary perspectives on the next federal budget’s impact on R&D. See an interview with Matt.
He compared responses by Congress under previous administrations; for example, the bipartisan pushback on efforts to reduce NIH budgets. He also discussed the relative emphasis that administrations place on applied vs. basic research funding in non-defense spending, and the possibility of reduced applied funding in the next budget. Key slides and details from his presentation are available.
Two supporting articles, with great charts and major insights, are worth reading. “The Trump Administration’s Science Budget: Toughest Since Apollo?” argues that “there’s a strong argument to be made that the first Trump Administration budget is the toughest of the post-Apollo era for science and technology, even with substantial information gaps still to be filled in.” “First Trump Budget Proposes Massive Cuts to Several Science Agencies” notes that, while details are still pending, “the picture that does emerge so far is one of an Administration seeking to substantially scale back the size of the federal science and technology enterprise nearly across the board – in some cases, through agency-level cuts not seen in decades.”
One more highlight was the luncheon talk by Cori Bargmann, President of Science for the Chan Zuckerberg Initiative, on long-term funding for advancing human potential and promoting equal opportunity.
The SIGAI shares the concerns of its parent organization ACM about the implications of recent executive orders and statements by President Trump and his administration. We request that the administration’s current and future actions not negatively affect members of the scientific community and their work. We encourage SIGAI members to choose actions that suit their individual positions on potential threats to the conduct of scientific work and on actions that may impede the AI community from pursuing and communicating scientific work. We recommend joining actions within ACM and those of other scientific organizations such as AAAS. We request that SIGAI members share their efforts and experiences and welcome all input and feedback at https://sigai.acm.org/aimatters/blog.
In this post, we suggest opportunities to act upon our concerns:
The March for Science on April 22nd is planned to demonstrate our passion for science and to call for support and safeguards for the scientific community. Recent policy changes have caused heightened worry among scientists.
The AAAS is calling on scientists to Be The Force For Science. They say, “The Trump Administration’s proposed budget would cripple the science and technology enterprise through short-sighted cuts to discovery science programs and critical mission agencies alike.”
With the events of the past several months, the officers are interested in making SIGAI’s own statement about the immediate and long-term future of AI, technology, and science in the United States. The travel ban was just the first of the issues that are likely to unfold and that may impede the AI community from pursuing and communicating scientific work. Other areas of immediate concern include appointments to the administration’s science positions, such as in the White House Office of Science & Technology Policy, and now the looming budget cuts to non-defense spending. Depending on how AI is framed to the administration, we could be negatively impacted if, for example, AI R&D appears to threaten jobs.
In this blog, we encourage a thorough discussion of a possible statement by SIGAI. Included in this post are statements by other groups and a draft SIGAI statement to get our discussion started.
Please give your feedback as Comments to this blog post and by sending your thoughts to Larry Medsker at LRM@gwu.edu.
DRAFT Statement by ACM SIGAI DRAFT
The SIGAI shares the concerns of its parent organization, ACM, about the implications of recent executive orders and statements by President Trump and his administration. We request that current and future actions not negatively affect members of the scientific community and their work.
We encourage SIGAI members to choose actions that suit their individual positions on potential threats to the conduct of scientific work and on actions that may impede the AI community from pursuing and communicating scientific work. We recommend joining avenues within ACM and the action plans of other scientific organizations such as AAAS and the March for Science on April 22.
We request that SIGAI members share their efforts and experiences and welcome all input and feedback at https://sigai.acm.org/aimatters/blog.
Statements by Other Groups
“The Association for Computing Machinery, a global scientific and educational organization representing the computing community, expresses concern over US President Donald J. Trump’s Executive Order imposing suspension of visas to nationals of seven countries.
“The open exchange of ideas and the freedom of thought and expression are central to the aims and goals of ACM. ACM supports the statute of International Council for Science in that the free and responsible practice of science is fundamental to scientific advancement and human and environmental well-being. Such practice, in all its aspects, requires freedom of movement, association, expression and communication for scientists. All individuals are entitled to participate in any ACM activity.”
“The SIGARCH executive committee shares the concerns of its parent organization, ACM, about the implications of the USA president’s executive order restricting entry of certain foreign nationals to the USA. These restrictions will not only affect scientists and members of our community who live outside of the USA, but they also impact the ability of many within the USA, in particular students, to travel. SIGARCH does not believe in, nor does it endorse, discrimination based on race, gender, faith, nationality or culture and is fully committed to its mission in spite of these restrictions. SIGARCH will be working on policies to best address this situation. Meanwhile, we strongly encourage all our sponsored events to provide support (e.g., technologies for remote participation) to maximize inclusive participation of our broader scientific community worldwide. Proposals for financial support towards this end should be submitted to the SIGARCH treasurer and will be considered on a case by case basis. We encourage event organizers to share their efforts and experiences and welcome all input and feedback at infodir_SIGARCH@acm.org.”
“Scientific progress depends on openness, transparency, and the free flow of ideas. The United States has always attracted and benefited from international scientific talent because of these principles.
“The American Association for the Advancement of Science (AAAS), the world’s largest general science society, has consistently encouraged international cooperation between scientists. We know that fostering safe and responsible conduct of research is essential for scientific advancement, national prosperity, and international security. Therefore, the detaining of students and scientists that have already been screened, processed, and approved to receive a visa to visit the United States is contrary to the spirit of science to pursue scholarly and professional interests. In order for science and the economy to prosper, students and scientists must be free to study and work with colleagues in other countries.
“The January 27, 2017 White House executive order on visas and immigration will discourage many of the best and brightest international students, scholars, and scientists from studying and working in the United States, or attending academic and scientific conferences. Implementation of this policy compromises the United States’ ability to attract international scientific talent and maintain scientific and economic leadership. It is in our national interest to take a balanced approach to immigration that protects national security interests and advances our scientific leadership.
“After the tragic events of September 11, 2001, as restrictions on immigration and foreign national travel were put in place to safeguard our national security, AAAS and other organizations worked closely with the Bush administration to advise on a balanced approach. We strongly recommend a similar discussion with officials in the Trump administration.”
Erik Brynjolfsson is an economist at MIT and co-author, with Andrew McAfee, of The Second Machine Age, a book that asks “what jobs will be left once software has perfected the art of driving cars, translating speech, and other tasks once considered the domain of humans.” The rapidly emerging fields of AI and data science, spawned by the ubiquitous role of data in our society, are producing tools and methods that surpass human ability to manage and analyze data.
You can often hear people say that, just as in previous technological revolutions, new jobs will be created to replace the old ones. But is this just a rationalization? Maybe the rate of technological change in the information and big-data age is of a different order than in the industrial revolution. A more optimistic outcome than automation leading to mass unemployment is to see these technologies as tools that will allow people to achieve more; for example, by working together with cognitive assistants.
So, which way will it be?
For AI and data science professionals, don’t we have a responsibility to seek data-based evidence to support our positions on the impact of data science and AI on future employment? Can we find and analyze data on what has happened to workers who have actually been replaced over the past five years? Some researchers estimate that 50 percent of total US employment is in the high-risk category, meaning that the associated occupations are potentially automatable. In the first wave, they predict that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers and labor in production occupations, are likely to be substituted by smart-computer capital.
Policymaking will no doubt lag behind the technology. Now is the time to discuss and advocate policies that address (1) innovating our education systems, (2) redefining employment, and (3) investigating alternate economic systems.