USACM

As your public policy officer, I have joined the USACM.  My goals are to introduce AI matters into USACM discussions and to relay AI-related ideas and issues from USACM to SIGAI members through blog postings.
Here is some information about USACM:

Mission
The U.S. Public Policy Council of ACM (USACM) is chartered as the focal point for ACM’s interaction with U.S. government organizations, the computing community, and the U.S. public in all matters of U.S. public policy related to information technology and computing — except issues in science and math education relevant to computing and computer science, which is the responsibility of the Educational Policy Committee (EPC). The USACM Council superseded the former ACM U.S. Public Policy standing committee.

The USACM is authorized to take official policy positions.  These positions reflect the views of USACM and not necessarily those of ACM. Policy positions of USACM are decided by a majority vote of the USACM Executive Committee.

Committees
Currently, USACM has seven standing committees, listed below with their chairs:
USACM-Accessibility: Harry Hochheiser (Accessibility & usability)
USACM-DigiGov: Chris Bronk (Digital governance)
USACM-IP: Paul Hyland (Intellectual property)
USACM-Law: Andy Grosso (IT & law)
USACM-Security: Alec Yasinsac (Security)
USACM-Privacy: Brian Dean (Privacy)
USACM-Voting: Barbara Simons (Voting-related computing issues)

Working Groups
Internet of Things (USACM-IOT)
Algorithmic Accountability (USACM-Algorithms)
Big Data (USACM-Data)

Please find more information about USACM at http://usacm.acm.org/
and the brochure at
http://usacm.acm.org/images/documents/USACMBrochure.pdf

AI Matters Interview with Peter Stone

Welcome!  This column is the third in our series profiling senior AI researchers. This month focuses on Peter Stone, a Professor at the University of Texas at Austin and the COO and co-founder of Cogitai, Inc.

Peter Stone’s Bio

Peter Stone

Dr. Peter Stone is the David Bruton, Jr. Centennial Professor and Associate Chair of Computer Science, as well as Chair of the Robotics Portfolio Program, at the University of Texas at Austin. In 2013 he was awarded the University of Texas System Regents’ Outstanding Teaching Award, and in 2014 he was inducted into the UT Austin Academy of Distinguished Teachers, earning him the title of University Distinguished Teaching Professor. Professor Stone’s research interests in Artificial Intelligence include machine learning (especially reinforcement learning), multiagent systems, robotics, and e-commerce. Professor Stone received his Ph.D. in Computer Science in 1998 from Carnegie Mellon University. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs – Research. He is an Alfred P. Sloan Research Fellow, Guggenheim Fellow, AAAI Fellow, Fulbright Scholar, and 2004 ONR Young Investigator. In 2003 he won an NSF CAREER award for his proposed long-term research on learning agents in dynamic, collaborative, and adversarial multiagent environments; in 2007 he received the prestigious IJCAI Computers and Thought Award, given biennially to the top AI researcher under the age of 35; and in 2016 he was awarded the ACM/SIGAI Autonomous Agents Research Award.

How did you become interested in AI?

The first time I remember becoming interested in AI was on a field trip to the University of Buffalo when I was in middle school or early high school (I don’t remember which).  The students rotated through a number of science labs, and one of the ones I ended up in was a computer science “lab.”  The thing that stands out in my mind is the professor showing us pictures of various shapes, such as triangles and squares, pointing out how easy it was for us to distinguish them, but then asserting that nobody knew how to write a computer program to do so (to date myself, this must have been the mid ’80s).  I had already started programming computers, but this got me interested in the concept of modeling intelligence with computers.

What made you decide the time was right for an AI startup?

Reinforcement learning has been a relatively “niche” area of AI since I became interested in it my first year of graduate school.  But with recent advances, I became convinced that now was the time to move to the next level and work on problems that are only possible to attack in a commercial setting.

How did I become convinced?  For that, I owe the credit to Mark Ring, one of my co-founders at Cogitai.  He and I met at the first NIPS conference I attended back in the mid ’90s.  We’ve stayed in touch intermittently.  But then in the fall of 2014 he visited Austin and got in touch.  He pitched the idea to me of starting a company based on continual learning, and it just made sense.

What professional achievement are you most proud of?

I’m made proud over and over again by the achievements of my students and postdocs.  I’ve been very fortunate to work with a phenomenal group of individuals, both technically and personally.  Nothing makes me happier than seeing each of them succeed in his or her own way and thinking that I played some small role in it.

What do you wish you had known as a Ph.D. student or early researcher?

It’s a cliché, but it’s true.  There’s no better time of life than when you’re a Ph.D. student.  You have the freedom to pursue one idea that you’re passionate about to the greatest extent possible, with very few other responsibilities.  You don’t have the status, appreciation, or salary that you deserve and that you’ll eventually, inevitably get.  And yes, there are pressures.  But your job is to learn and to change the world in some small way.  I didn’t appreciate it when I was a student, even though my advisor (Manuela Veloso) told me.  And I don’t expect my students to believe me when I tell them now.  But over time I hope they come to appreciate it as I have.  I loved my time as a Ph.D. student. But if I had known how many aspects of that time of life would be fleeting, I might have appreciated it even more.

What would you have chosen as your career if you hadn’t gone into AI?

I have no idea.  When I graduated from the University of Chicago as an undergrad, I applied to 4 CS Ph.D. programs, the Peace Corps, and Teach for America.  CMU was the only Ph.D. program that admitted me.  So I probably would have done the Peace Corps or Teach for America.  Who knows where that would have led me?

What is a “typical” day like for you?

I live a very full life.  Every day I spend as much time with my family as they’ll let me (teenagers…) and get some sort of exercise (usually soccer, swimming, running, or biking).  I also play my violin about 3-4 times per week.  I schedule those things, and other aspects of my social life, and then work in all my “free” time.  That usually means catching up on email in the morning, attending meetings with students and colleagues either in person or by Skype, reading articles, and editing students’ papers.  And I work late at night and on weekends when there’s no “fun” scheduled.  But really, there’s no “typical” day.  Some days I’m consumed with reading; others with proposal writing; others with negotiations with prospective employees; others with university politics; others with event organization; others with coming up with new ideas for burning problems.

I do a lot of multitasking, and I’m no better at it than anyone else. But I’m never bored.

How do you balance being involved in so many different aspects of the AI community?

I don’t know.  I have many interests and I can’t help but pursue them all.  And I multitask.

What is your favorite CS or AI-related movie or book and why?

Rather than a book, I’ll choose an author.  As a teenager, I read Isaac Asimov’s books voraciously – both his fiction (of course “I, Robot” made an impression, but the Foundation series was always my favorite) and his non-fiction.  He influenced my thoughts and imagination greatly.

Policy Issues for AI Discussion

Today’s blog post seeks to focus on, and initiate a discussion about, the current administration’s positions on AI R&D support and public policies. We would like to know SIGAI members’ views on the important areas of concern for AI-related policies.

In December 2016, the Obama administration released a report on Artificial Intelligence, Automation, and the Economy. This report followed the administration’s previous report, Preparing for the Future of Artificial Intelligence, which recommended that the White House publish a report on the economic impacts of artificial intelligence by the end of 2016. The reports addressed the readiness of the United States for a future in which artificial intelligence plays a growing role. The Obama administration’s views are described in the Roadmap for AI Policy by Ajay Agrawal, Joshua Gans, and Avi Goldfarb in the December 21, 2016, Harvard Business Review. Some reference points from outside the US are Artificial intelligence: an overview for policy-makers from the U.K. and China’s planning for AI.

Miles Brundage and Joanna Bryson argued in August 2016 (see Smart Policies for Artificial Intelligence) that a de facto artificial intelligence policy already exists: “a patchwork of policies impacting the field of AI’s development in myriad ways. The key question related to AI policy, then, is not whether AI should be governed at all, but how it is currently being governed, and how that governance might become more informed, integrated, effective, and anticipatory.”

Some potential implications of AI for society involve the speed of change due to advances in AI technology; the loss of individual control and privacy; job destruction due to automation; and the need for laws and public policy on AI technology’s role in the transformation of society. An important point is that AI’s impact is arriving much faster, and at a much larger scale of use, than past technological advances such as the industrial revolution. Organizations need to recognize the likelihood of disruption to their operations, which will happen whether or not change is intentional and planned.

In our current environment, we need to examine the extent of the new administration’s understanding of AI technology and the need for policies, laws, and planning. So far, not much information is available, from specifics about who will head the National Highway Traffic Safety Administration (NHTSA), the main federal agency that regulates car safety, to the administration’s view of time scales. For example, the administration may take the position that AI will not cause job losses for many decades, a view that could distort assumptions about labor market trends and lead to policy mistakes. These views on the future of AI could affect policies and programs that promote entrepreneurship and job creation. A few days ago an executive order established the American Technology Council, with an initial focus on information technology. The status of the White House Office of Science and Technology Policy is not available on the OSTP website. AI technology and applications will continue to grow rapidly, but whether public policy will keep pace is in doubt.

Please share your ideas via comments to this post and email messages to aimatters@sigai.acm.org.

Advocating for Science Beyond the March

Be a Force for Science: Advocating for Science Beyond the March
Wednesday, April 19, 2017 2:00 p.m. – 3:00 p.m. ET

Register here for the free AAAS webinar to learn about practical, concrete steps you can take to be a science advocate locally, nationally, and internationally. The panel of communications and advocacy experts will share best practices on outreach topics, including:
• How to communicate the importance of evidence-based decision making to policymakers.
• How to work with the media.
• How to share the value of science and its impact with the public.

AAAS will also unveil an online advocacy toolkit.

Panelists:
Erika Shugart
Executive Director
American Society for Cell Biology

Francis Slakey
Interim Director of Public Affairs
American Physical Society

Suzanne Ffolkes
Vice President of Communications
Research!America

Moderator: Erin Heath
Associate Director, Office of Government Relations
AAAS

Science & Technology Policy Forum

In this post, I report on my attendance at the excellent annual AAAS Forum on Science & Technology Policy, held on March 27th in Washington, DC.

Very interesting presentations included those on federal agency priorities by NIH Director Francis Collins and NSF Director France Córdova. While nearly everyone at the Forum was worried about the new administration’s funding for R&D, several exciting initiatives were discussed, such as NSF’s ideas for “Harnessing Data for 21st Century Science and Engineering” and “Shaping the Human-Technology Frontier”, of particular interest to SIGAI (see a detailed description). Likewise, NIH is embarking on its “All of Us” research program, aimed at extending precision medicine to all diseases.

Returning to the concern about government support for science & technology funding, Matt Hourihan, who runs the R&D Budget and Policy Program at AAAS, gave preliminary perspectives on the next federal budget’s impact on R&D. See an interview with Matt.

He compared responses by Congress during previous administrations, for example, the bipartisan pushback on efforts to reduce NIH budgets. He also discussed the relative emphasis across administrations on applied vs. basic research funding in non-defense spending, and the possibility of reduced applied funding in the next budget. Key slides and details from his presentation are available.

Supporting articles, with great charts and major insights, are
The Trump Administration’s Science Budget: Toughest Since Apollo?
“In fact, there’s a strong argument to be made that the first Trump Administration budget is the toughest of the post-Apollo era for science and technology, even with substantial information gaps still to be filled in.”
First Trump Budget Proposes Massive Cuts to Several Science Agencies
While still waiting for details, “the picture that does emerge so far is one of an Administration seeking to substantially scale back the size of the federal science and technology enterprise nearly across the board – in some cases, through agency-level cuts not seen in decades.”

One more highlight was the luncheon talk by Cori Bargmann, President of Science for the Chan Zuckerberg Initiative, on long-term funding for advancing human potential and promoting equal opportunity.

Stay tuned as the R&D budget evolves!

SIGAI Statement on New Federal Policies

Draft Statement by ACM SIGAI

The SIGAI shares the concerns of its parent organization, ACM, about the implications of recent executive orders and statements by President Trump and his administration. We request that the administration’s current and future actions not negatively affect members of the scientific community and their work. We encourage SIGAI members to choose actions that suit their individual positions on potential threats to the conduct of scientific work and on actions that may impede the AI community from pursuing and communicating scientific work. We recommend joining actions within ACM and those of other scientific organizations such as AAAS. We request that SIGAI members share their efforts and experiences and welcome all input and feedback at https://sigai.acm.org/aimatters/blog.

In this post, we suggest opportunities to act upon our concerns:

The March for Science on April 22nd is planned to demonstrate our passion for science and to call for support and safeguards for the scientific community. Recent policy changes have caused heightened worry among scientists.

The AAAS is calling on scientists to Be The Force For Science. They say, “The Trump Administration’s proposed budget would cripple the science and technology enterprise through short-sighted cuts to discovery science programs and critical mission agencies alike.”


SIGAI Science Policy Statement Discussion

With the events of the past several months, the officers are interested in making SIGAI’s own statement about the immediate and long-term future of AI, technology, and science in the United States. The travel ban was just the first of the issues that are likely to unfold and that may impede the AI community from pursuing and communicating scientific work. Other areas of immediate concern include appointments to the administration’s science positions, such as the White House Office of Science & Technology Policy, and now the looming budget cuts for non-defense spending. Depending on how AI is framed to the administration, we could be negatively impacted if, for example, AI R&D appears to threaten jobs.

In this blog, we encourage a thorough discussion of a possible statement by SIGAI. Included in this post are statements by other groups and a draft statement to get our discussion started.

Please give your feedback as Comments to this blog post and by sending your thoughts to Larry Medsker at LRM@gwu.edu.

——————————————————–

DRAFT       Statement by ACM SIGAI       DRAFT

The SIGAI shares the concerns of its parent organization, ACM, about the implications of recent executive orders and statements by President Trump and his administration. We request that current and future actions not negatively affect members of the scientific community and their work.
We encourage SIGAI members to choose actions that suit their individual positions on potential threats to the conduct of scientific work and on actions that may impede the AI community from pursuing and communicating scientific work. We recommend joining efforts within ACM and the action plans of other scientific organizations such as AAAS and the March for Science on April 22.
We request that SIGAI members share their efforts and experiences and welcome all input and feedback at https://sigai.acm.org/aimatters/blog.

——————————————————–

Statements by Other Groups

ACM Statement

“The Association for Computing Machinery, a global scientific and educational organization representing the computing community, expresses concern over US President Donald J. Trump’s Executive Order imposing suspension of visas to nationals of seven countries.

“The open exchange of ideas and the freedom of thought and expression are central to the aims and goals of ACM. ACM supports the statute of International Council for Science in that the free and responsible practice of science is fundamental to scientific advancement and human and environmental well-being. Such practice, in all its aspects, requires freedom of movement, association, expression and communication for scientists. All individuals are entitled to participate in any ACM activity.”

SIGARCH Statement

“The SIGARCH executive committee shares the concerns of its parent organization, ACM, about the implications of the USA president’s executive order restricting entry of certain foreign nationals to the USA. These restrictions will not only affect scientists and members of our community who live outside of the USA, but they also impact the ability of many within the USA, in particular students, to travel. SIGARCH does not believe in, nor does it endorse, discrimination based on race, gender, faith, nationality or culture and is fully committed to its mission in spite of these restrictions. SIGARCH will be working on policies to best address this situation. Meanwhile, we strongly encourage all our sponsored events to provide support (e.g., technologies for remote participation) to maximize inclusive participation of our broader scientific community worldwide. Proposals for financial support towards this end should be submitted to the SIGARCH treasurer and will be considered on a case by case basis. We encourage event organizers to share their efforts and experiences and welcome all input and feedback at infodir_SIGARCH@acm.org.”

AAAS Statement

“Scientific progress depends on openness, transparency, and the free flow of ideas. The United States has always attracted and benefited from international scientific talent because of these principles.

“The American Association for the Advancement of Science (AAAS), the world’s largest general science society, has consistently encouraged international cooperation between scientists. We know that fostering safe and responsible conduct of research is essential for scientific advancement, national prosperity, and international security. Therefore, the detaining of students and scientists that have already been screened, processed, and approved to receive a visa to visit the United States is contrary to the spirit of science to pursue scholarly and professional interests. In order for science and the economy to prosper, students and scientists must be free to study and work with colleagues in other countries.

“The January 27, 2017 White House executive order on visas and immigration will discourage many of the best and brightest international students, scholars, and scientists from studying and working in the United States, or attending academic and scientific conferences. Implementation of this policy compromises the United States’ ability to attract international scientific talent and maintain scientific and economic leadership. It is in our national interest to take a balanced approach to immigration that protects national security interests and advances our scientific leadership.

“After the tragic events of September 11, 2001, as restrictions on immigration and foreign national travel were put in place to safeguard our national security, AAAS and other organizations worked closely with the Bush administration to advise on a balanced approach. We strongly recommend a similar discussion with officials in the Trump administration.”

AI and Future Employment

Erik Brynjolfsson is an economist at MIT and co-author, with Andrew McAfee, of The Second Machine Age, a book that asks “what jobs will be left once software has perfected the art of driving cars, translating speech, and other tasks once considered the domain of humans.” The rapidly emerging fields of AI and data science, spawned by the ubiquitous role of data in our society, are producing tools and methods that surpass human ability to manage and analyze data.
You can often hear people say that, just as in other technological revolutions, new jobs will be created to replace the old ones. But is this just a rationalization? Maybe the rate of technological change in the information and big data age is of a different order than it was in the industrial revolution. A more optimistic outcome than automation leading to mass unemployment is to see these technologies as tools that will allow people to achieve more, for example, by working together with cognitive assistants.
So, which way will it be?
For AI and data science professionals, don’t we have a responsibility to use and seek data-based evidence to support our positions on the impact of data science and AI on future employment? Can we find and analyze data on what has happened to workers actually replaced over the past five years? Some researchers estimate that 50 percent of total US employment is in the high-risk category, meaning that the associated occupations are potentially automatable. In the first wave, they predict that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers and labor in production occupations, are likely to be substituted by smart computer capital.

Policy Matters
Policymaking will no doubt lag behind the technology. Now is the time to discuss and advocate for policies that address (1) innovating our education systems, (2) redefining employment, and (3) investigating alternative economic systems.

Your thoughts?

Kim-Mai Cutler Interview with Jack Clark

Kim-Mai Cutler at Initialized Capital interviewed Jack Clark of OpenAI about The Public Policy Implications of Artificial Intelligence. Some of the issues raised are important to discuss in the AI Matters policy thread, particularly the need for AI policy and regulations that anticipate:
Cognitive assistance
Productivity, distribution of technology, and the exacerbation of inequality
Mobility and the need for lifelong learning
Strategic funding for AI research
AI technology and net loss vs net increase in jobs

We are interested to hear your ideas and reactions to the interview!

Artificial Intelligence and Life in 2030

The Stanford One Hundred Year Study on Artificial Intelligence raises issues we should discuss.  For example, the study reminds us that “Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies. Rapid advances in AI research and, especially, its applications require experts in these sectors to develop new concepts and metaphors for law and policy. Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from promulgating racial discrimination or financial cheating? Who should reap the gains of efficiencies enabled by AI technologies and what protections should be afforded to people whose skills are rendered obsolete? As people integrate AI more broadly and deeply into industrial processes and consumer products, best practices need to be spread, and regulatory regimes adapted.”

Learn more from Artificial Intelligence and Life in 2030 – One Hundred Year Study on Artificial Intelligence, Report of the 2015-2016 Study Panel, Stanford University, Stanford, CA, September 2016. https://ai100.stanford.edu/2016-report