Artificial Intelligence Journal: FUNDING OPPORTUNITIES for PROMOTING AI RESEARCH
Deadline for proposals: extended to January 20th, 2018

The Artificial Intelligence Journal (AIJ) is one of the longest-established and most respected journals in AI, and since it was founded in 1970 it has published many of the key papers in the field. The operation of the Editorial Board is supported financially through an arrangement with AIJ’s publisher, Elsevier. Through this arrangement, the AIJ editorial board is able to make available substantial funds (on the order of 230,000 Euros per annum) to support the promotion and dissemination of AI research. Most of these funds are made available through a series of competitive open calls (the remainder of the budget is reserved for sponsoring studentships for the annual IJCAI conference).

The current call has a deadline of January 20th, 2018 and a budget of 120,000 Euros.

Proposals should be submitted following the format, content, and submission guidelines available on the AIJ web site:

http://aij.ijcai.org/index.php/funding-opportunities-for-promoting-ai-research

(We posted this call at a time when the above website had not yet been updated, but it should be updated by the time you are reading this blog post. In the meantime, you can click here for the details.)

Interview with Ayanna Howard

Welcome! This column is the fifth in our series profiling senior AI researchers. This month's column focuses on Dr. Ayanna Howard. In addition to our interview, Dr. Howard was recently interviewed by NPR, which created an animated video about how “Being Different Helped A NASA Roboticist Achieve Her Dream.”

Ayanna Howard’s Bio


Ayanna Howard, Ph.D., is Professor and Linda J. and Mark C. Smith Endowed Chair in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. As an educator, researcher, and innovator, Dr. Howard’s career focus is on intelligent technologies that must adapt to and function within a human-centered world. Her work, which encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, has resulted in over 200 peer-reviewed publications across a number of projects, from assistive robots in the home to AI-powered STEM apps for children with diverse learning needs. She has over 20 years of R&D experience covering a number of projects that have been supported by various agencies, including the National Science Foundation, NewSchools Venture Fund, Procter and Gamble, NASA, and the Grammy Foundation. Dr. Howard received her B.S. in Engineering from Brown University, her M.S.E.E. from the University of Southern California, her M.B.A. from the Drucker Graduate School of Management, and her Ph.D. in Electrical Engineering from the University of Southern California. To date, her unique accomplishments have been highlighted through a number of awards and articles, including highlights in USA Today, Upscale, and TIME Magazine, as well as being named an MIT Technology Review top young innovator and recognized as one of the 23 most powerful women engineers in the world by Business Insider. In 2013, she also founded Zyrobotics, which is currently licensing technology derived from her research and has released its first suite of STEM educational products to engage children of all abilities. From 1993 to 2005, Dr. Howard was at NASA’s Jet Propulsion Laboratory. She has also served as the Associate Director of Research for the Georgia Tech Institute for Robotics and Intelligent Machines and as Chair of the multidisciplinary Robotics Ph.D. program at Georgia Tech.

How did you become interested in Computer Science and AI?

I first became interested in robotics as a young, impressionable, middle school girl. My motivation was the television series The Bionic Woman; my goal in life, at that time, was to gain the skills necessary to build the bionic woman. I figured that I had to acquire combined skill sets in engineering and computer science in order to accomplish that goal. With respect to AI, I became interested after my junior year in college, when I was required to design my first neural network during my third NASA summer internship in 1992. I quickly saw that, if I could combine the power of AI with robotics, I could enable the ambitious dreams of my youth.


What was your most difficult professional decision and why?

The most difficult professional decision I had to make was to leave NASA and pursue robotics research as an academic. The primary place I’d worked from 1990 until 2005 was NASA. Over those 15 years I’d grown in my technical positions from summer intern to computer scientist (after college graduation) to information systems engineer, robotics researcher, and then senior robotics researcher. And then I was faced with the realization that, in order to push my ambitious goals in robotics, I needed more freedom to pursue robotics applications outside of space exploration. The difficulty was that I still enjoyed the space robotics research efforts I was leading at NASA, but I also felt a need to expand beyond my intellectual comfort zone.

What professional achievement are you most proud of?

The professional achievement I am proudest of is the founding of a startup company, Zyrobotics, which has commercialized educational products based on technology licensed from my lab at Georgia Tech. I’m most proud of this achievement because it allowed me to combine all of the hard-knock lessons I’ve learned in designing artificial intelligence algorithms, adaptive user interfaces, and human-robot interaction schemes with a real-world application that has large societal impact: engaging children of diverse abilities in STEM education, including coding.

What do you wish you had known as a Ph.D. student or early researcher?

As a Ph.D. student, I wish I had known that finding a social support group is just as important to your academic growth as finding an academic/research home. I consider myself a fairly stubborn person; I treat words of discouragement as a challenge to prove others wrong. But psychological death by a thousand cuts (i.e., words of negativism) is a reality for many early researchers. A social support group helps to balance the negativism that others, sometimes unconsciously, subject you to.

What would you have chosen as your career if you hadn’t gone into CS?

If I hadn’t gone into the field of Robotics/AI, I would have chosen a career as a forensic scientist. I’ve always loved puzzles, and as a forensic scientist I would have focused on solving life’s puzzles based on the physical evidence. The data doesn’t lie (although, as we know, you can bias the data so it seems to).

What is a “typical” day like for you?

Although I have no “typical” day, I can categorize my activities into five main buckets, in no priority order: 1) human-human interactions, 2) experiments and deployments, 3) writing (including emails), 4) life-balance activities, and 5) thinking/research activities. Human-human interactions involve everything from meeting with my students to talking with special education teachers to one-on-one observations in the pediatric clinic. Experiments and deployments involve everything from running a participant study to evaluating the statistics associated with a study hypothesis. Writing involves reviewing my students’ publication drafts, writing proposals, and, of course, addressing email action items. Life-balance activities include achieving my daily exercise goals as well as ensuring I don’t miss any important family events. Finally, thinking/research activities cover anything related to coding up a new algorithm, consulting with my company, or jotting down a new research concept on a scrap of paper.

What is the most interesting project you are currently involved with?

The most interesting project that I currently lead involves developing robot therapy interventions for young children with motor disabilities. For this project, we have developed an interactive therapy game called SuperPop VR that requires children to play within a virtual environment based on a therapist-designed protocol. A robot playmate interacts with each child during game play and provides both corrective and motivational feedback. An example of corrective feedback is when the robot physically shows the child how to interact with the game at the correct movement speed (as compared to a normative data profile). An example of motivational feedback is when the robot, through social interaction, encourages the child when they have accomplished their therapy exercise goal. We have deployed the system in pilot studies with children with cerebral palsy and have shown positive changes with respect to their kinematic outcome metrics. We’re pushing the state of the art in this space by incorporating additional factors for enhancing long-term engagement through adaptation of both the therapy protocol and the robot behaviors.

How do you balance being involved in so many different aspects of the AI community?

In order for me to become involved in any new AI initiative and still maintain a healthy work-life balance, I ask myself: Is this initiative important to me and aligned with my value system? Can I provide a unique perspective to this initiative that would help make a difference? Is it as important as, or more important than, other initiatives I’m involved in? And is there a current activity that I can replace so I have time to commit to the initiative now or in the near future? If the answer to all of those questions is yes, then I’m usually able to find an optimal balance of involvement in the different AI initiatives of interest.

What is your favorite CS or AI-related movie or book and why?

My favorite AI-related movie is The Matrix. What fascinates me about The Matrix is the symbiotic relationship that exists between humans and intelligent agents (both virtual and physical). One entity cannot seem to exist without the other. And operating in the physical world is much more difficult than operating in the virtual one, although most agents don’t realize that difference until they accept the decision to navigate in both types of worlds.

Recent and Current Events: CRA and IEEE

December is a busy month for AI Policy activities. This blog post is a summary of the important topics in which SIGAI members are involved. Subsequent Policy blog posts will cover these in more detail.  Meanwhile, we encourage you to read the information in this post and participate in the IEEE Standards Association December 18th online event on Policy for Artificial Intelligence.

Computing Research Association December 12, 2017
Summit on Technology and Jobs

The summit co-sponsors included ACM and ACM SIGAI. The overview is as follows:
“The goal of the summit was to put the issue of technology and jobs on the national agenda in an informed and deliberate manner. The summit brought together leading technologists, economists, and policy experts who offered their views on where technology is headed and what its impact may be, and on policy issues raised by these projections and possible policy responses. The summit was hosted by the Computing Research Association, as part of its mission to engage the computing research community to provide trusted, non-partisan input to policy thinkers and makers.”

I attended and will be writing about this important issue in the January 1 post. Please look at the livestreams of the sessions at:
https://livestream.com/accounts/11031579/events/7936961/videos/167138978
https://livestream.com/accounts/11031579/events/7936961/videos/167149704
https://livestream.com/accounts/11031579/events/7936961/videos/167155909

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

As reported in previous posts, members of SIGAI and USACM have been working closely with IEEE colleagues on ethics and policy issues.

The Global Initiative was launched in April of 2016 to move beyond the paranoia and the uncritical admiration regarding autonomous and intelligent technologies and to illustrate that aligning technology development and use with ethical values will help advance innovation while diminishing fear in the process. The goal of The IEEE Global Initiative is “to incorporate ethical aspects of human well-being that may not automatically be considered in the current design and manufacture of A/IS technologies and to reframe the notion of success so human progress can include the intentional prioritization of individual, community, and societal ethical values.”

The goal of the Global Initiative is “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.”

Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS) encourages technologists to prioritize ethical considerations in the creation of A/IS systems. EADv2 is being released as a Request For Input.  Details on how to submit public comments are available via The Initiative’s Submission Guidelines.

Download here: EADv2

Policy for Artificial Intelligence: The Power of Imaginaries

IEEE Standards Association (IEEE-SA) will present the third in a series of three free online events focused on Policy for Artificial Intelligence on December 18, 2017, at 12:00 p.m. EST.

Policy for Artificial Intelligence: The Power of Imaginaries will feature Konstantinos Karachalios (Managing Director, IEEE-SA, and Member of the IEEE Management Council); Nicolas Miailhe (Co-Founder and President, The Future Society; Senior Visiting Fellow, Program on Science, Technology and Society, Harvard Kennedy School; and member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems); and Cyrus Hodes (Director of the AI Initiative with The Future Society at Harvard Kennedy School). John C. Havens, Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, will moderate.

IEEE-SA: “Imaginaries are, ‘collectively held, institutionally stabilized, and publicly performed visions of a desirable future, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology’ (Jasanoff & Kim; from Dreamscapes of Modernity).   If we want to have a positive future in regards to AI, we have to critically reflect upon our current imaginary in order to ‘imagine’ a new one, and the policy and principles we need to attain it.”
REGISTER TODAY

Call for Nominations: ACM SIGAI Autonomous Agents Research Award 2018

Nominations are solicited for the 2018 ACM SIGAI Autonomous Agents Research Award. This award is made for excellence in research in the area of autonomous agents. It is intended to recognize researchers in autonomous agents whose current work is an important influence on the field. The award is an official ACM award, funded by an endowment created by ACM SIGAI from the proceeds of previous Autonomous Agents conferences. The recipient of the award will receive a monetary prize and a certificate, and will be invited to present a plenary talk at the AAMAS 2018 conference in Stockholm, Sweden.
Previous winners of the ACM SIGAI Autonomous Agents Research Award are: David Parkes (2017), Peter Stone (2016), Catherine Pelachaud (2015), Michael Wellman (2014), Jeff Rosenschein (2013), Moshe Tennenholtz (2012), Joe Halpern (2011), Jonathan Gratch and Stacy Marsella (2010), Manuela Veloso (2009), Yoav Shoham (2008), Sarit Kraus (2007), Michael Wooldridge (2006), Milind Tambe (2005), Makoto Yokoo (2004), Nicholas R. Jennings (2003), Katia Sycara (2002), and Tuomas Sandholm (2001). For more information on the award, see the Autonomous Agents Research Award page.
How to nominate
Anyone can make a nomination. Nominations should be made by email to the chair of the award committee, Jeff Rosenschein (jeff@cs.huji.ac.il), and should consist of a short (< 1 page) statement that emphasizes not only the research contributions that the individual has made that merit the award but also how the individual’s current work is an important influence on the field.
NOTE: a candidate can only be considered for the award if they are explicitly nominated. If you believe that someone deserves the award, then NOMINATE THEM — don’t assume that somebody else will!
Important dates
  • 17 January 2018 — Deadline for nominations
  • 7 February 2018 — Announcement of the 2018 winner
  • 10-15 July 2018 — AAMAS-2018 conference in Stockholm

News from AAAI FSS-17

This year’s Fall Symposium Series (November 9-11) provided updates and insights on advances in research and technology, including resources for discussion of AI policy issues. The symposia addressed topics in human-robot interaction, cognitive assistance in government and public sectors, military applications, human-robot collaboration, and a standard model of the mind. An important theme for public policy was the set of advances in, and questions about, human-AI collaboration.

The cognitive assistance sessions this year focused on government and public sector applications, particularly autonomous systems, healthcare, and education. Discussions of advances in human-technology collaboration touched on issues relevant to public policy, including privacy and algorithmic transparency. The increasing mix of AI and humans in ubiquitous public and private systems prompted discussion of new technological developments and of the need to understand and anticipate challenges for communication and collaboration. Particular issues included jobs and de-skilling of the workforce, assigning credit and blame when AI applications work or fail, and the role of humans working with autonomous systems.

IBM’s Jim Spohrer made an outstanding presentation “A Look Toward the Future”, incorporating his rich experience and current work on anticipated impacts of new technology. His slides are well worth studying, especially for the role of hardware in game-changing technologies with likely milestones every ten years through 2045. Radical developments in technology would challenge public policy in ways that are difficult to imagine, but current policymakers and the AI community need to try.

Particular takeaways, and anticipated subjects for future blogs, concern the impact of likely far-reaching research and applications on public policy. The degree and nature of cognitive collaboration with machines, the future of jobs, new demands on educational systems as cognitive assistance becomes deep and pervasive, and the anticipated radical changes in AI capabilities put the challenges to public policy in a new perspective. AI researchers and developers need to partner with social scientists to anticipate communication and societal issues as human-machine collaboration accelerates, both in system development teams and in the new workforce.

Some recommended topics for thinking about AI technology and policy are the following:
  • Jim Spohrer’s slideshare
  • Noriko Arai’s TED talk on the Todai Robot
  • Humans, Robotics, and the Future of Manufacturing
  • New education systems and the future of work
  • Computing education: coding vs. learning to use systems
  • The smartphone app “Seeing AI”
  • AAAI, for information related to science policy issues

Public Policy Opportunities

USACM Council
The membership of USACM will be voting soon to elect at-large representatives to the USACM Council, with terms starting January 1st. At-large Council members whose terms expire this December 31st are Jean Camp, Simson Garfinkel, and Jonathan Smith. If you are a member of USACM and are interested in serving on the USACM Council, please contact a member of the nominations committee. If someone is in line with what you think USACM should be doing, then please nominate that person. Only those who have been USACM members for at least one year as of January 1, 2018, are eligible. The deadline for having a slate of candidates is November 13th.

ACM Policy Award
Consider nominating someone for this award, which is given in alternate years; the initial award has yet to be made because insufficient nominations were received the first time around. “The ACM Policy Award was established in 2014 to recognize an individual or small group that had a significant positive impact on the formation or execution of public policy affecting computing or the computing community. This can be for education, service, or leadership in a technology position; for establishing an innovative program in policy education or advice; for building the community or community resources in technology policy; or other notable policy activity. The award is accompanied by a $10,000 prize.” Further information and instructions are available at http://awards.acm.org/policy/nominations.
The award can recognize one or more of the following:
– Contributions to policy while working in a policy position
– Distinguished service on and contributions to policy issues
– Advanced scholarly work that has impacted policy
The deadline for nominations is January 15, 2018.

Missed Opportunities — Federal Science Policy Offices
I reached out to people who might know of prospects for the current Administration to make important policy position appointments.
Not much to report:
1. The Administration has yet to nominate a Director for the White House Office of Science and Technology Policy (OSTP). The OSTP Director traditionally serves as the president’s science adviser.
2. The Office of the Chief Technology Officer is also vacant. In the past, the CTO team has helped shape Federal policies, initiatives, capacity, and investments that support the mission of harnessing the power of technology. It has also worked to anticipate and guard against the consequences that can accompany new discoveries and technologies.
3. The U.S. Department of Agriculture’s chief scientist nominee, Sam Clovis, recently withdrew his name from consideration. Clovis is a climate change denier with no training in science, food, or agriculture. For months, scientists, activists, and a broad coalition of groups have come together to demand that the Senate reject his nomination.

AAAS Policy News
For timely and objective information on current science and technology issues and assistance in understanding Federal science policy, check with the AAAS Office of Government Relations at https://www.aaas.org/program/govrelations
and the AAAS Policy and Public Statements at https://www.aaas.org/about/policy-and-public-statements.

Is it too late to address the moral, ethical, and economic issues introduced by the commercialization of AI?

What do recent deployments of AI mean to the public or the average citizen? Will AI be a transparent technology, invisible at the public policy level? Is it too late to address the moral, ethical, and economic issues introduced by the commercialization of AI?

On September 14, 2017, the NEOACM (Northeast Ohio ACM) Professional Chapter held the “We come in peace 2” AI panel, hosted by the McDonough Museum of Fine Art in Youngstown, Ohio. The members of the panel were: Doug McCollough, CIO of Dublin, Ohio; Dr. Shiqi Zhang, AI and robotics researcher at Cleveland State University; Andrew Konya, co-founder and CEO of Remesh, a Cleveland-based AI company; Dr. Jay Ramanathan, Executive Director of Arthapedia.zone; Paul Carlson, Intelligent Community Strategist for Columbus, Ohio; and Dr. Mark Vopat, Professor of Political Philosophy and Applied Ethics at Youngstown State University. Our moderator was Nikola Danaylov, author of the best-selling book “Conversations with the Future: 21 Visions for the 21st Century.”

The goal of the panel was to discuss the latent consequences, both positive and negative, of recent AI-based technologies that are being deployed and reaching the general public. The scope ranged from the ethics and policy that must be considered as smart cities come online to the impact of robotics and decision-making technologies in law enforcement. The panel visited subject matter as diverse as cognitive computing and agent belief. While the focus originally started out on AI deployments in cities in the state of Ohio, it became clear that most of the issues were universal in nature. The panel started at 6:00 p.m. EDT and was just getting warmed up when we had to bring it to a close at 8:00 p.m. EDT. There just wasn’t time to get to all of the questions, or to do justice to all of the issues and topics that were introduced during the panel. There was a burning desire to continue the conversation and debate. So, after a discussion with some of our fellow ACM members at SIGAI and the AI panelists, we’ve decided to carry over some of that discussion to an AI Matters blog in hopes that we can engage the broader AI community and have a more flexible format that gives us ample time and space. Some of the highlights of the AI panel can be found at:

2017 AI Panel “We come in peace”

The plan is to tackle some of the subject matter in this blog and to handle other aspects in webinar form. We hope that our fellow SIGAI members will feel free to contribute to this conversation as it develops, providing questions, insights, suggestions, and direction. The moderator Nikola Danaylov and the panelists have all agreed to participate in this blog, so if this blog goes anything like the panel discussion, “hold on to your seats”! We want to dive into questions such as: What does this recent incarnation of “Artificial Intelligence” mean to the public or for the average citizen? What impact will it have on infrastructure and the economy? From a commercialization perspective, has “AI” been displaced by machine learning and data science? If AI and machine learning become transparent technologies, will it be possible to regulate their impact on society? Is it already too late to stop any potential negative impact of AI-based technologies? And I, for one, am looking forward to continuing the discussion of just what constitutes agent beliefs, where they come from, and how agent belief systems will be dealt with at the public policy or commercialization level. And then again, perhaps even these are the wrong questions to be asking if our concern is the public good. We hope you join us as we attempt to deal with these questions and more.

Cheers

Cameron Hughes
Current Chair NEOACM Professional Chapter
SIGAI Member

Joint Panel of ACM and IEEE

The new joint ACM/IEEE group met recently via conference calls to explore the idea of proposing a session at the 2018 RightsCon in Toronto on a topic of mutual interest to the two organizations’ ethics and policy members. Your SIGAI members Simson Garfinkel, Sven Koenig, Nick Mattei, and Larry Medsker are participating in the group. Stuart Shapiro, Chair of the ACM US Public Policy Council, is representing ACM. Members from IEEE include John C. Havens, Executive Director of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, and Dr. Ansgar Koene, University of Nottingham, working group chair for the IEEE Standard on Algorithm Bias Considerations.

The group meets again soon to propose a panel in the area of bias and algorithmic accountability. SIGAI members are welcome to nominate panel members and volunteer. SIGAI members are also encouraged to contribute ideas that could focus the discussion and meet the following RightsCon goals:
– including speakers from a diverse range of backgrounds
– addressing an important challenge to human rights in the digital age
– engaging participants in a way that inspires real-world outcomes
(e.g., new policy approaches and innovative technology solutions)
– introducing new voices, new concepts, and a fresh take on an issue
– having the potential to encourage cross-sector collaborations
– using an innovative format to present the idea and generate outcomes

The call for proposals mentions “Artificial Intelligence, Automation, and Algorithmic Accountability” as one of its program “buckets”. RightsCon is accepting presentation proposals until November 24, 2017. The program will have 16 buckets, covering topics that include Digital Security and Encryption; Artificial Intelligence, Automation, and Algorithmic Accountability; and Misinformation, Journalism, and the Future of Online Media.

Computing Community Consortium

On October 23-24, 2017, the Computing Community Consortium (CCC) will hold the Computing Research: Addressing National Priorities and Societal Needs Symposium to address the current and future contributions of computing and its role in addressing societal needs.

The Computing Community Consortium says it “has hosted dozens of research visioning workshops to imagine, discuss, and debate the future of computing and its role in addressing societal needs. The second CCC Computing Research symposium draws these topics into a program designed to illuminate current and future trends in computing and the potential for computing to address national challenges.”

You may also want to check out the CCC Blog at http://www.cccblog.org/ for policy issues of common interest for SIGAI members.

IEEE and ACM Collaborations on ATA

At last month’s USACM Panel at the National Press Club (reported in the AI Matters policy blog last time), I had the opportunity to talk with one of the panelists, Dr. Ansgar Koene, Senior Research Fellow (UnBias, CaSMa & Horizon Policy Impact). Ansgar is at the Horizon Digital Economy Research Institute, University of Nottingham, and he is the working group chair for the IEEE Standard on Algorithm Bias Considerations. Be sure to see Ansgar’s article about the ‘AI gaydar’ in The Conversation: https://theconversation.com/machine-gaydar-ai-is-reinforcing-stereotypes-that-liberal-societies-are-trying-to-get-rid-of-83837.

Following the USACM Panel at the National Press Club, attendees discussed ways to bring together the voices of ACM and IEEE on Algorithmic Transparency and Accountability. One opportunity is at RightsCon Toronto, May 16-18, 2018. The call for proposals mentions “Artificial Intelligence, Automation, and Algorithmic Accountability” as one of its program “buckets”. RightsCon is accepting proposals for presentations until November 24, 2017. The program will have 16 buckets, covering topics ranging from Digital Security and Encryption, and Artificial Intelligence, Automation, and Algorithmic Accountability, to Misinformation, Journalism, and the Future of Online Media.

A new initiative is Local Champions at RightsCon Toronto, which features leading voices in Canada’s digital rights landscape. The Local Champions will provide thought leadership, program guidance, and topic identification to ensure that the most pressing issues are represented at RightsCon.

Dr. Koene also shared information about the IEEE P7001 Working Group on the IEEE Standard on Transparency of Autonomous Systems, http://sites.ieee.org/sagroups-7001/. This working group is chaired by Prof. Alan Winfield, who is also very interested in the idea of data recorders, like airplane ‘black boxes’, to provide insight into the behavior of autonomous vehicles for accident investigation. http://www.cems.uwe.ac.uk/~a-winfield/

Please share additional opportunities for SIGAI members to join with other groups working on issues in algorithmic transparency and accountability. We also welcome your comments on the many AI applications and technologies that should be included in our focus on public policy.

New Conference: AAAI/ACM Conference on AI, Ethics, and Society

ACM SIGAI is pleased to announce the launch of the AAAI/ACM Conference on AI, Ethics, and Society, to be co-located with AAAI-18, February 2-3, 2018 in New Orleans. The Call for Papers is included below and is also available at  http://www.aies-conference.com/. Please note the October 31 deadline for submissions.

We hope to see you at the new conference in New Orleans next February!
************************

AAAI/ACM Conference on AI, Ethics, and Society
February 2-3, 2018
New Orleans, USA

http://www.aies-conference.com/

As AI becomes more pervasive in our lives, its impact on society grows more significant, and concerns are being raised regarding aspects such as value alignment, data bias and data policy, regulation, and workforce displacement. Only a multi-disciplinary and multi-stakeholder effort, drawing on experts from disciplines such as AI, computer science, ethics, philosophy, economics, sociology, psychology, law, history, and politics, can find the best ways to address these concerns. In order to address these issues in a scientific context, AAAI and ACM have joined forces to start a new conference, the AAAI/ACM Conference on AI, Ethics, and Society.

The first edition of this conference will be co-located with AAAI-18 on February 2-3, 2018 in New Orleans, USA. The program of the conference will include peer-reviewed paper presentations, invited talks, panels, and working sessions.

The conference welcomes contributions on a broad set of topics, including the following:

  • Building ethical AI systems
  • Value alignment
  • Moral machine decision making
  • Trust and explanations in AI systems
  • Fairness and Transparency in AI systems
  • Ethical design and development of AI systems
  • AI for social good
  • Human-level AI
  • Controlling AI
  • Impact of AI on workforce
  • Societal impact of AI
  • AI and law

Submitted papers should adopt a scientific approach to address any questions related to the above topics. Moreover, they should clearly establish the research contribution, its relevance, and its relation to prior research. All submissions must be made in the appropriate format, and within the specified length limit; details and a LaTeX template can be found at the conference web site.

We solicit papers (PDF files) of up to 6 pages plus 1 page for references (AAAI format), submitted through the EasyChair system.

We expect papers to be submitted by researchers from several disciplines (AI, computer science, philosophy, economics, law, and others). The program committee includes members who are experts in all the relevant areas, to ensure appropriate review of papers.

IMPORTANT NOTICE: To accommodate the publishing traditions of different fields, authors of accepted papers can ask that only a one-page abstract of the paper appear in the proceedings, along with a URL pointing to the full paper. Authors should guarantee the link to be reliable for at least two years. This option is available to accommodate subsequent publication in journals that would not consider results that have been published in preliminary form in a conference proceedings. Such papers must be submitted electronically and formatted just like papers submitted for full-text publication.

Results previously published or presented at another archival conference prior to this one, or published (or accepted for publication) at a journal prior to the submission deadline, can be submitted only if the author intends to publish the paper as a one-page abstract.

The proceedings of the conference will be published in the ACM Digital Library.

Among all papers, a best paper will be selected by the program committee and will be awarded the AI, People, and Society best paper award, sponsored by the Partnership on AI. The award is $1,000. Also, the winner will be able to participate in a global competition among several conferences, for a grand prize of $7,500.

A selected subset of the accepted papers will have the opportunity to be considered for journal publication in the JAIR special track on AI and Society (http://www.jair.org/specialtrack-aisoc-call.html).

Important dates:

Submission: October 31st, 2017
Notification: December 15th, 2017
Final version: March 1st, 2018

(Note: the final version due date is after the conference dates, to include feedback from the conference discussions).

Conference program co-chairs:

AI: Francesca Rossi, IBM Research and University of Padova
AI and workforce: Jason Furman, Harvard University
AI and philosophy: Huw Price, Cambridge University
AI and law: TBD

More information will be available soon on the conference web site.