News from AAAI FSS-17

This year’s Fall Symposium Series (November 9-11) provided updates and insights on advances in research and technology, including resources for discussion of AI policy issues. The symposia addressed topics in human-robot interaction, cognitive assistance in government and public sectors, military applications, human-robot collaboration, and a standard model of the mind. An important theme for public policy was advances in, and open questions about, human-AI collaboration.

The cognitive assistance sessions this year focused on government and public sector applications, particularly autonomous systems, healthcare, and education. Human-technology collaboration advances involved discussions of issues relevant to public policy, including privacy and algorithmic transparency. The increasing mix of AI with humans in ubiquitous public and private systems was the subject of discussions about new technological developments and the need to understand and anticipate challenges for communication and collaboration. Particular issues included jobs and the de-skilling of the workforce, credit and blame when AI applications work or fail, and the role of humans in autonomous systems.

IBM’s Jim Spohrer made an outstanding presentation “A Look Toward the Future”, incorporating his rich experience and current work on anticipated impacts of new technology. His slides are well worth studying, especially for the role of hardware in game-changing technologies with likely milestones every ten years through 2045. Radical developments in technology would challenge public policy in ways that are difficult to imagine, but current policymakers and the AI community need to try.

Particular takeaways, and anticipated subjects for future blogs, concern the impact of likely far-reaching research and applications on public policy. The degree and nature of cognitive collaboration with machines, the future of jobs, new demands on educational systems as cognitive assistance becomes deep and pervasive, and anticipated radical changes in AI capabilities put the challenges to public policy in a new perspective. AI researchers and developers need to partner with social scientists to anticipate communication and societal issues as human-machine collaboration accelerates, both in system development teams and in the new workforce.

Some recommended topics for thinking about AI technology and policy are the following:
Jim Spohrer’s slideshare
Noriko Arai’s TED talk on Todai Robot
Humans, Robotics, and the Future of Manufacturing
New education systems and the future of work
Computing education: Coding vs. learning to use systems
Smartphone app “Seeing AI”
AAAI for information related to science policy issues.

Public Policy Opportunities

USACM Council
The membership of USACM will be voting soon to elect at-large representatives to the USACM Council, with terms starting January 1st. At-large Council members whose terms expire this December 31st are Jean Camp, Simson Garfinkel, and Jonathan Smith. If you are a member of USACM and are interested in serving on the USACM Council, please contact a member of the nominations committee. If someone you know is in line with what you think USACM should be doing, then please nominate that person. Only those who have been USACM members for at least one year as of January 1, 2018, are eligible. The deadline for having a slate of candidates is November 13th.

ACM Policy Award
Consider nominating someone for this award, which is given in alternate years; the initial award has yet to be made because insufficient nominations were received the first time around. “The ACM Policy Award was established in 2014 to recognize an individual or small group that had a significant positive impact on the formation or execution of public policy affecting computing or the computing community. This can be for education, service, or leadership in a technology position; for establishing an innovative program in policy education or advice; for building the community or community resources in technology policy; or other notable policy activity. The award is accompanied by a $10,000 prize.” Further information and instructions are available at http://awards.acm.org/policy/nominations.
The award can recognize one or more of the following:
– Contributions to policy while working in a policy position
– Distinguished service on and contributions to policy issues
– Advanced scholarly work that has impacted policy
The deadline for nominations is January 15, 2018.

Missed Opportunities — Federal Science Policy Offices
I reached out to people who might know of prospects for the current Administration to make important policy position appointments.
Not much to report:
1. The Administration has yet to nominate a Director for the White House Office of Science and Technology Policy (OSTP). The OSTP Director traditionally serves as the president’s science adviser.
2. The Office of the Chief Technology Officer is also vacant. In the past, the CTO team helped shape Federal policies, initiatives, capacity, and investments that support the mission of harnessing the power of technology. They also worked to anticipate and guard against the consequences that can accompany new discoveries and technologies.
3. The U.S. Department of Agriculture’s chief scientist nominee, Sam Clovis, recently withdrew his name from consideration. Clovis is a climate change denier with no training in science, food, or agriculture. For months, scientists, activists, and a broad coalition of groups have come together to demand that the Senate reject his nomination.

AAAS Policy News
For timely and objective information on current science and technology issues and assistance in understanding Federal science policy, check with the AAAS Office of Government Relations at https://www.aaas.org/program/govrelations
and the AAAS Policy and Public Statements at https://www.aaas.org/about/policy-and-public-statements.

Is it too late to address the moral, ethical, and economic issues introduced by the commercialization of AI?

What do recent deployments of AI mean to the public or the average citizen? Will AI be a transparent technology, invisible at the public policy level? Is it too late to address the moral, ethical, and economic issues introduced by the commercialization of AI?

On September 14, 2017, the NEOACM (Northeast Ohio ACM) Professional Chapter held the “We come in peace 2” AI panel, hosted by the McDonough Museum of Fine Art in Youngstown, Ohio. The members of the panel were: Doug McCollough, CIO of Dublin, Ohio; Dr. Shiqi Zhang, AI and robotics researcher at Cleveland State University; Andrew Konya, co-founder and CEO of Remesh, a Cleveland-based AI company; Dr. Jay Ramanathan, Executive Director of Arthapedia.zone; Paul Carlson, Intelligent Community Strategist for Columbus, Ohio; and Dr. Mark Vopat, Professor of Political Philosophy and Applied Ethics at Youngstown State University. Our moderator was Nikola Danaylov, author of the best-selling book “Conversations with the Future: 21 Visions for the 21st Century”.

The goal of the panel was to discuss the latent consequences, both positive and negative, of recent AI-based technologies that are being deployed and reaching the general public. The scope ranged from the ethics and policy that must be considered as smart cities are brought online to the impact of robotics and decision-making technologies in law enforcement. The panel visited subject matter as diverse as cognitive computing and agent belief. While the focus originally started out on AI deployments in cities in the state of Ohio, it became clear that most of the issues were universal in nature. The panel started at 6:00 p.m. EDT and was just getting warmed up when we had to bring it to a close at 8:00 p.m. EDT. There just wasn’t time to get to all of the questions, or to do justice to all of the issues and topics that were introduced during the panel. There was a burning desire to continue the conversation and debate. So after a discussion with some of our fellow ACM members at SIGAI and the AI panelists, we’ve decided to carry over some of that discussion to an AI Matters blog in hopes that we can engage the broader AI community as well as have a more flexible format that gives us ample time and space. Some of the highlights from the AI panel can be found at:

2017 AI Panel “We come in peace”

The plan is to tackle some of the subject matter in this blog and to handle other aspects in webinar form. We hope that our fellow SIGAI members will feel free to contribute to this conversation as it develops by providing questions, insights, suggestions, and direction. The moderator, Nikola Danaylov, and the panelists have all agreed to participate in this blog, so if it goes anything like the panel discussion, “hold on to your seats”! We want to dive into questions such as: What does this recent incarnation of “Artificial Intelligence” mean to the public or to the average citizen? What impact will it have on infrastructure and the economy? From a commercialization perspective, has “AI” been displaced by machine learning and data science? If AI and machine learning become transparent technologies, will it be possible to regulate their impact on society? Is it already too late to stop any potential negative impact of AI-based technologies? And I, for one, am looking forward to continuing the discussion of just what constitutes agent beliefs, where they come from, and how agent belief systems will be dealt with at the public policy or commercialization level. Then again, perhaps even these are the wrong questions to be asking if our concern is the public good. We hope you join us as we attempt to deal with these questions and more.

Cheers

Cameron Hughes
Current Chair NEOACM Professional Chapter
SIGAI Member

Joint Panel of ACM and IEEE

The new joint ACM/IEEE group met recently via conference calls to explore the idea of proposing a session at the 2018 RightsCon in Toronto on a topic of mutual interest to the two organizations’ ethics and policy members. Your SIGAI members Simson Garfinkel, Sven Koenig, Nick Mattei, and Larry Medsker are participating in the group. Stuart Shapiro, Chair of the ACM US Public Policy Council, is representing ACM. Members from IEEE include John C. Havens, Executive Director of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, and Dr. Ansgar Koene, University of Nottingham, working group chair for the IEEE Standard on Algorithm Bias Considerations.

The group meets again soon to propose a panel in the area of bias and algorithmic accountability. SIGAI members are welcome to nominate panel members and volunteer. SIGAI members are also encouraged to contribute ideas that could focus the discussion and meet the following RightsCon goals:
– including speakers from a diverse range of backgrounds
– addressing an important challenge to human rights in the digital age
– engaging participants in a way that inspires real-world outcomes
(e.g., new policy approaches and innovative technology solutions)
– introducing new voices, new concepts, and a fresh take on an issue
– having the potential to encourage cross-sector collaborations
– using an innovative format to present the idea and generate outcomes

The call for proposals mentions “Artificial Intelligence, Automation, and Algorithmic Accountability” as one of their program “buckets”. RightsCon is accepting presentation proposals until November 24, 2017. The program has 16 buckets, covering topics ranging from Digital Security and Encryption to Artificial Intelligence, Automation, and Algorithmic Accountability to Misinformation, Journalism, and the Future of Online Media.

Computing Community Consortium

On October 23-24, 2017, the Computing Community Consortium (CCC) will hold the Computing Research: Addressing National Priorities and Societal Needs Symposium on the current and future contributions of computing to issues of national priority and societal need.

The CCC says it “has hosted dozens of research visioning workshops to imagine, discuss, and debate the future of computing and its role in addressing societal needs. The second CCC Computing Research symposium draws these topics into a program designed to illuminate current and future trends in computing and the potential for computing to address national challenges.”

You may also want to check out the CCC Blog at http://www.cccblog.org/ for policy issues of common interest for SIGAI members.

IEEE and ACM Collaborations on ATA

At last month’s USACM Panel at the National Press Club (reported in the AI Matters policy blog last time), I had the opportunity to talk with one of the panelists, Dr. Ansgar Koene, Senior Research Fellow: UnBias, CaSMa & Horizon Policy Impact. Ansgar is at the Horizon Digital Economy Research Institute, University of Nottingham, and he is the working group chair for the IEEE Standard on Algorithm Bias Considerations. Be sure to see Ansgar’s article about the ‘AI gaydar’ in The Conversation: https://theconversation.com/machine-gaydar-ai-is-reinforcing-stereotypes-that-liberal-societies-are-trying-to-get-rid-of-83837.

Following the USACM Panel at the National Press Club, attendees discussed ways to bring together the voices of ACM and IEEE on Algorithmic Transparency and Accountability. One opportunity is at RightsCon Toronto, May 16-18, 2018. The call for proposals mentions “Artificial Intelligence, Automation, and Algorithmic Accountability” as one of their program “buckets”. RightsCon is accepting proposals for presentations until November 24, 2017. The program has 16 buckets, covering topics ranging from Digital Security and Encryption to Artificial Intelligence, Automation, and Algorithmic Accountability to Misinformation, Journalism, and the Future of Online Media.

A new initiative is Local Champions at RightsCon Toronto, which features leading voices in Canada’s digital rights landscape. They plan to support thought leadership, program guidance, and topic identification to ensure that the most pressing issues are represented at RightsCon.

Dr. Koene also shared information about the IEEE P7001 Working Group on the IEEE Standard on Transparency of Autonomous Systems http://sites.ieee.org/sagroups-7001/. This working group is chaired by Prof. Alan Winfield who is also very interested in the idea of data recorders, like airplane ‘black boxes’, to provide insight into behavior of autonomous vehicles for accident investigation. http://www.cems.uwe.ac.uk/~a-winfield/

Please share additional opportunities for SIGAI members to join with other groups working on issues in algorithmic transparency and accountability. We welcome also your comments on the many AI applications and technologies that should be included in our focus on public policy.

New Conference: AAAI/ACM Conference on AI, Ethics, and Society

ACM SIGAI is pleased to announce the launch of the AAAI/ACM Conference on AI, Ethics, and Society, to be co-located with AAAI-18, February 2-3, 2018 in New Orleans. The Call for Papers is included below and is also available at  http://www.aies-conference.com/. Please note the October 31 deadline for submissions.

We hope to see you at the new conference in New Orleans next February!
************************

AAAI/ACM Conference on AI, Ethics, and Society
February 2-3, 2018
New Orleans, USA

http://www.aies-conference.com/

As AI becomes more pervasive in our lives, its impact on society grows more significant, and concerns and issues are being raised regarding aspects such as value alignment, data bias and data policy, regulation, and workforce displacement. Only a multi-disciplinary and multi-stakeholder effort can find the best ways to address these concerns, including experts from various disciplines, such as AI, computer science, ethics, philosophy, economics, sociology, psychology, law, history, and politics. In order to address these issues in a scientific context, AAAI and ACM have joined forces to start a new conference, the AAAI/ACM Conference on AI, Ethics, and Society.

The first edition of this conference will be co-located with AAAI-18 on February 2-3, 2018 in New Orleans, USA. The program of the conference will include peer-reviewed paper presentations, invited talks, panels, and working sessions.

The conference welcomes contributions on a broad set of topics, including the following:

  • Building ethical AI systems
  • Value alignment
  • Moral machine decision making
  • Trust and explanations in AI systems
  • Fairness and transparency in AI systems
  • Ethical design and development of AI systems
  • AI for social good
  • Human-level AI
  • Controlling AI
  • Impact of AI on workforce
  • Societal impact of AI
  • AI and law

Submitted papers should adopt a scientific approach to address any questions related to the above topics. Moreover, they should clearly establish the research contribution, its relevance, and its relation to prior research. All submissions must be made in the appropriate format, and within the specified length limit; details and a LaTeX template can be found at the conference web site.

We solicit papers (pdf file) of up to 6 pages + 1 page for references (AAAI format), submitted through the Easychair system.

We expect papers submitted by researchers from several disciplines (AI, computer science, philosophy, economics, law, and others). The program committee includes members who are experts in all the relevant areas, to ensure appropriate review of papers.

IMPORTANT NOTICE: To accommodate the publishing traditions of different fields, authors of accepted papers can ask that only a one-page abstract of the paper appear in the proceedings, along with a URL pointing to the full paper. Authors should guarantee the link to be reliable for at least two years. This option is available to accommodate subsequent publication in journals that would not consider results that have been published in preliminary form in a conference proceedings. Such papers must be submitted electronically and formatted just like papers submitted for full-text publication.

Results previously published or presented at another archival conference prior to this one, or published (or accepted for publication) at a journal prior to the submission deadline, can be submitted only if the author intends to publish the paper as a one-page abstract.

The proceedings of the conference will be published in the ACM Digital Library.

Among all papers, a best paper will be selected by the program committee and will be awarded the AI, People, and Society best paper award, sponsored by the Partnership on AI. The award is $1,000. Also, the winner will be able to participate in a global competition among several conferences, for a grand prize of $7,500.

A selected subset of the accepted papers will have the opportunity to be considered for journal publication in the JAIR special track on AI and Society (http://www.jair.org/specialtrack-aisoc-call.html).

Important dates:

Submission: October 31st, 2017
Notification: December 15th, 2017
Final version: March 1st, 2018

(Note: the final version due date is after the conference dates, to include feedback from the conference discussions).

Conference program co-chairs:

AI: Francesca Rossi, IBM Research and University of Padova
AI and workforce: Jason Furman, Harvard University
AI and philosophy: Huw Price, Cambridge University
AI and law: TBD

More information will be available soon on the conference web site.

National Press Club USACM Panel

Your Public Policy Officer attended the USACM Panel on Algorithmic Transparency and Accountability on Thursday, Sept 14th at the National Press Club. The panelists were moderator Simson Garfinkel, Jeanna Neefe Matthews, Nicholas Diakopoulos, Dan Rubins, Geoff Cohen, and Ansgar Koene. USACM Chair Stuart Shapiro opened the event, and Ben Shneiderman provided comments from the audience.

USACM and EUACM have identified and codified a set of principles intended to ensure fairness in this evolving policy and technology ecosystem. These were a focus of the panel discussion and are as follows:
(1) awareness;
(2) access and redress;
(3) accountability;
(4) explanation;
(5) data provenance;
(6) audit-ability; and
(7) validation and testing.
See also the full letter in the September 2017 issue of CACM.

The panel and audience discussion ranged from frameworks for evaluating algorithms and creating policy for fairness to examples of algorithmic abuse. Language for clear communication with the public and policymakers, and even with scientists, was a concern, as has been discussed in our Public Policy blog. Algorithms in the strict sense may not always be the issue, but rather the data used to build and train a system, especially when the system is used for prediction and decision making. Much was said about the types of bias and unfairness that can be embedded in modern AI and machine learning systems. The essence of the concerns includes the ability to explain how a system works, the need to develop models of algorithmic transparency, and how policy or an independent clearinghouse might identify fair and problematic algorithmic systems.

Please read more about the panel discussion at https://www.acm.org/public-policy/algorithmic-panel
and
watch the very informative YouTube video of the panel at https://www.youtube.com/watch?v=DDW-nM8idgg&feature=youtu.be

September Policy Events

Please note AI policy issues getting national attention. Look for replays and videos if you cannot attend or view live events.

Artificial Intelligence, Automation, and Jobs
Panelists at the Technology Policy Institute’s 2017 Aspen Forum talk about the impact of artificial intelligence and automation on jobs. Speakers included authors and educators, Google’s chief economist, and a Microsoft AI research specialist. C-SPAN 1 Program ID: 432196-2
Airing Details • Sep 03, 2017 | 12:47pm EDT | C-SPAN 1 • Sep 04, 2017 | 10:19pm EDT |

Experts to Explore Far-Reaching Impact of Algorithms on Society and Best Strategies to Prevent Algorithmic Bias.
USACM will be hosting a panel event on algorithmic transparency and accountability on Thursday, September 14 from 9am to 10:30am at the National Press Club in Washington, DC.  Experts Ansgar Koene (University of Nottingham), Dan Rubins (Legal Robot), Geoff A. Cohen (Stroz Friedberg), Jeanna Matthews (Clarkson University), and Nicholas Diakopoulos (Northwestern University) will be discussing the impact of algorithmic decision-making in society and the technical underpinnings of algorithmic models. The panel will be moderated by Simson Garfinkel, Co-chair of USACM’s Working Group on Algorithmic Transparency and Accountability. https://www.acm.org/media-center/2017/august/usacm-ata-panel-media-advisory 

AI Matters Interview: Getting to Know Maja Mataric

Welcome!  This month we interview Maja Mataric, Vice Dean for Research and the Director of the Robotics and Autonomous Systems Center at the University of Southern California.

Maja Mataric’s Bio

Maja Mataric named as one of 10 up-and-coming LA innovators to watch

Maja Matarić is Professor and Chan Soon-Shiong Chair in the Computer Science Department, Neuroscience Program, and Department of Pediatrics at the University of Southern California; founding director of the USC Robotics and Autonomous Systems Center (RASC); co-director of the USC Robotics Research Lab; and Vice Dean for Research in the USC Viterbi School of Engineering. She received her PhD in Computer Science and Artificial Intelligence from MIT in 1994, her MS in Computer Science from MIT in 1990, and her BS in Computer Science from the University of Kansas in 1987.

How did you become interested in robotics and AI?

When I moved to the US in my teens, my uncle wisely advised me that “computers are the future” and that I should study computer science. But I was always interested in human behavior. So AI was the natural combination of the two, but I really wanted to see behavior in the real world, and that is what robotics is about. Now that is especially interesting as we can study the interaction between people and robots, my area of research focus.

Do you have any suggestions for people interested in doing outreach to K-12 students or the general public?

Getting involved with K-12 students is incredibly rewarding! I do a huge amount of K-12 outreach, including students, teachers, and families. I find the best way to do so is by including my PhD students and undergraduates, who are naturally more relatable to the K-12 students: I always have them say what “grade” they are in and how much more fun “school” is once they get to do research. The other key parts to outreach include letting the audience do more than observe: the audience should get involved, touch, and ask questions. And finally, the audience should get to take something home, such as concrete links to more information and accessible and affordable activities so the outreach experience is not just a one-off. Above all, I think it’s critical to convey that STEM is changing on almost a daily basis, that everyone can do it, and that whoever gets into it can shape its future and with it, the future of society.

How do you think robotics or AI researchers in academia should best connect to industry?

Recently, connections to industry have become especially pressing in robotics, which has gone, during my career so far, from being a small area of specialization to being a massive and booming area of employment opportunity and huge technology leaps. This means undergraduate and graduate students need to be trained in the latest and most relevant skills and methods, and all students need to be inspired and empowered to pursue skills and careers in these areas, not just those who self-select it as their most obvious path; we have to work proactively on diversity and inclusion, as these are needs clearly articulated by industry. There are great models of companies that have strong outreach to researchers, such as Microsoft and Google, to name two, both holding annual faculty research summits and offering grant opportunities for faculty to connect with their research and business units. As in all contexts, it is best to develop personal relationships with contacts at relevant companies, as they tend to lead to the most meaningful collaborations.

What was your most difficult professional decision and why?

It’s hard to pick one, but here are, briefly, three that are interesting: 1) I had to actively choose whether to speak up against unfair treatment when I was still pre-tenure and in a very under-represented group, or to stay silent and not make waves. I spoke up and never regretted being true to myself. 2) I had to choose whether to take part of my time away from research to get involved and stay involved in academic administration. I chose to do so, but also chose to never let it take more than the official half time, and never stomp on my research. 3) I had to choose whether to leave academia for a startup or industry. These days, that is an increasingly complex choice, but as long as academia allows us to explore and experiment, it will remain the best choice.

What professional achievement are you most proud of?

The successes of my students and of my research field. Seeing my PhD students receive presidential awards while having balanced lives with families and still responding to my emails just makes me beam with pride. Pioneering a field, socially assistive robotics, that focuses on helping users with special needs, from those with autism to those with Alzheimer’s, to reach their potential. Seeing that field become established and grow from the enthusiasm of wonderful students and young researchers is an unparalleled source of professional satisfaction.

What do you wish you had known as a Ph.D. student or early researcher?

Nobody, no matter how senior or famous, knows how things are going to work out and how much another person can achieve. So when receiving advice, believe encouragement and utterly ignore discouragement. I am fortunate to be very stubborn by nature, but it was still a hard lesson and I see too many young people taking advice too seriously; it’s good to get advice but take it with a grain of salt: keep pushing for what you enjoy and believe in, even if it makes some waves and raises some eyebrows.

What would you have chosen as your career if you hadn’t gone into robotics?

I think about that when I talk to K-12 students; I try to tell them that it is fine to have a meandering path. I finally understand that what really fascinates me is people and what makes us tick. I could have studied that from various perspectives, including medicine, psychology, neuroscience, anthropology, economics, history… but since I was advised (by my uncle, see above) to go into computer science, I found a way to connect those paths. It’s almost arbitrary but it turned out to be lucky, as I love what I do.

What is a “typical” day like for you?

I have no typical day; they are all crazy in enjoyable ways. I prefer to spend my time in face-to-face interactions with people, and there are so many to collaborate with, from PhD students and undergraduate students, to research colleagues, to dean’s office colleagues, to neighbors on my floor and around my lab, to K-12 students we host. It’s all about people. And sure, there is a lot of on-line work, too, too much of it given how much less satisfying it is compared to human-human interactions, but we have to read, review, evaluate, recommend, rank, approve, certify, link, purchase, pay, etc.

What is the most interesting project you are currently involved with?

Since I got involved with socially assistive robotics, I truly love all my research projects: we are working with children with autism, on reducing pain in hospital patients, and on addressing anxiety, loneliness, and isolation in the elderly. I share with my students the curiosity to try new things and enjoy the opportunity to do so collaboratively and often in a very interdisciplinary way, so there is never a shortage of new things to discover, learn, and overcome, and, hopefully, to do some good.

How do you balance being involved in so many different aspects of the robotics and AI communities?

With daily difficult choices: it’s an hourly struggle to focus on what is most important, set the rest aside, and then get back to enough of it but not all of it and, above all, to know what is in what category. I find that my family provides an anchoring balance that helps greatly with prioritizing.

What is your favorite CS or AI-related movie or book and why?

“WALL-E”: it’s a wonderfully human (vulnerable, caring, empathetic, idealistic) portrayal of a robot, one that has all the best of our qualities and none of the worst. After that, “Robot and Frank” and “Big Hero 6”.

Predictive Policing and Beyond

In the August 1 post, I offered a more detailed view of “algorithm” in “Algorithmic Transparency”, particularly in some machine learning software. The example was about systems involving neural networks, where algorithms in the technical sense are likely not the cause of concern, but the data used to train the system could lead to policy issues. On the other hand, “predictive” algorithms in systems are potentially a problem and need to be transparent and explained. They are susceptible to unintentional — and intentional — human bias and misuse. Today’s post gives a particular example.

Predictive policing software, popular and useful in law enforcement offices, is particularly prone to issues of bias, accuracy, and misuse. The algorithms are written to determine propensity to commit a crime and where crime might occur. Policy concerns are related to skepticism about the efficacy and fairness of such systems, and thus accountability and transparency are very important.
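The feedback between patrol intensity and recorded crime can be made concrete with a toy simulation. The sketch below is purely illustrative (the neighborhood names, rates, and recording model are all hypothetical, not drawn from any real system): two neighborhoods have the same underlying crime rate, but one is patrolled twice as heavily, so a larger fraction of its incidents enter the historical records that a naive "predictive" model then scores on.

```python
import random

random.seed(0)

# Hypothetical setup: identical true crime rates, different patrol intensity.
TRUE_CRIME_RATE = 0.05                 # same in both neighborhoods
RECORD_RATE = {"A": 0.3, "B": 0.6}     # fraction of incidents that get recorded

def simulate_records(n_residents=10_000):
    """Simulate recorded-incident counts per neighborhood."""
    records = {}
    for hood, record_rate in RECORD_RATE.items():
        # Same incident-generating process in both neighborhoods...
        incidents = sum(random.random() < TRUE_CRIME_RATE
                        for _ in range(n_residents))
        # ...but only the patrolled fraction shows up in the data.
        records[hood] = sum(random.random() < record_rate
                            for _ in range(incidents))
    return records

records = simulate_records()

# A naive "predictive" model scores each neighborhood by its share of
# recorded incidents, which mirrors patrol intensity, not the crime rate.
total = sum(records.values())
risk_score = {hood: count / total for hood, count in records.items()}

print(records)     # B has roughly twice A's recorded incidents
print(risk_score)  # B looks about twice as "risky" despite identical rates
```

The algorithm here is trivially "fair" (it just normalizes counts), yet its output is biased because the training data encodes where police looked rather than where crime occurred; deploying such scores to direct future patrols would reinforce the skew.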

As stated in Slate, “The Intercept published a set of documents from a two-day event in July hosted by the U.S. Immigration and Customs Enforcement’s Homeland Security Investigations division, where tech companies were invited to learn more about the kind of software ICE is looking to procure for its new ‘Extreme Vetting Initiative.’ According to the documents, ICE is in the market for a tool that it can use to predict the potential criminality of people who come into the country.” Further information on the Slate article is available here.

The AI community should help investigate algorithmic accountability and transparency in the case of predictive policing and the subsequent application of the algorithms to new areas. We should then discuss our SIGAI position and public policy.