Call for Proposals: Artificial Intelligence Activities Fund

ACM SIGAI invites funding proposals for artificial intelligence (AI) activities with a strong outreach component, either to students, researchers, and practitioners not working on AI technologies or to the public in general.

The purpose of this call is to promote a better understanding of current AI technologies, including their strengths and limitations, as well as their promise for the future. Examples of fundable activities include (but are not limited to) AI technology exhibits or exhibitions, meetings with expert panels on AI technology (including AI ethics), podcasts or short films on AI technologies that are accessible to the public, and AI programming competitions. ACM SIGAI will look for evidence that the information presented by the activity will be of high quality, accurate, unbiased (for example, not influenced by company interests), and at the right level for the intended audience.

ACM SIGAI has set aside $10,000 to provide grants of up to $2,000 each, with priority given to a) proposals from ACM-affiliated organizations other than conferences (such as ACM SIGAI chapters or ACM chapters), b) out-of-the-box ideas, c) new activities (rather than existing and recurring ones), d) activities with long-term impact, e) activities that reach many people, and f) activities co-funded by others. We prefer not to fund activities for which sufficient funding is already available from elsewhere or that result in a profit for the organizers. Note that expert talks on AI technology can typically be arranged with financial support from the ACM Distinguished Speaker program (https://speakers.acm.org/) and are therefore not appropriate for funding via this call.

A proposal should contain the following information on at most 3 pages:

  • a description of the activity (including when and where it will be held);
  • a budget for the activity and the amount of funding requested, and whether other organizations have been or will be approached for funding (and, if so, for how much);
  • an explanation of how the activity fits this call (including whether it is new or recurring, which audience it will benefit, and how large the audience is);
  • a description of the organizers and other participants (such as speakers) involved in the activity (including their expertise and their affiliation with ACM SIGAI or ACM);
  • a description of what will happen to any surplus, in the unexpected case that there is one; and
  • the name, affiliation, and contact details (including postal and email address, phone number, and URL) of the corresponding organizer.

Grantees are required to submit reports to ACM SIGAI after completion of their activities, detailing how they used the funds, along with other information that might also be published in the ACM SIGAI newsletter “AI Matters.”

The deadline for submissions is 11:59pm on March 15, 2019 (UTC-12). Proposals should be submitted as PDF documents in any style at

https://easychair.org/conferences/?conf=sigaiaaf2019.

The funding decisions of ACM SIGAI are final and cannot be appealed. At its discretion, ACM SIGAI might not award all funding earmarked for this call, for example, if the number of high-quality proposals is not sufficiently large. In case of questions, please first check the ACM SIGAI blog for announcements and clarifications: https://sigai.acm.org/aimatters/blog/. Questions should be directed to Sven Koenig (skoenig@usc.edu).

ACM and ACM SIGAI

ACM brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. As the world’s largest computing society, ACM strengthens the profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM’s reach extends to every part of the globe, with more than half of its 100,000 members residing outside the U.S.  Its growing membership has led to Councils in Europe, India, and China, fostering networking opportunities that strengthen ties within and across countries and technical communities. Their actions enhance ACM’s ability to raise awareness of computing’s important technical, educational, and social issues around the world. See https://www.acm.org/ for more information.

ACM SIGAI brings together academic and industrial researchers, practitioners, software developers, end users, and students who are interested in AI. It promotes and supports the growth and application of AI principles and techniques throughout computing, sponsors or co-sponsors AI-related conferences, organizes the Career Network and Conference for early-stage AI researchers, sponsors recognized AI awards, supports AI journals, provides scholarships to its student members to attend conferences, and promotes AI education and publications through various forums and the ACM digital library. See https://sigai.acm.org/ for more information.

Sven Koenig, ACM SIGAI chair
Sanmay Das, ACM SIGAI vice-chair
Rosemary Paradis, ACM SIGAI secretary/treasurer
Michael Rovatsos, ACM SIGAI conference coordination officer
Nicholas Mattei, ACM SIGAI AI and society officer

Follow the Data

The Ethical Machine: Big Ideas for Designing Fairer AI and Algorithms is a “project that presents ideas to encourage a discussion about designing fairer algorithms” of the Shorenstein Center on Media, Politics, and Public Policy at the Harvard Kennedy School. The November 27, 2018, publication is “Follow the Data! Algorithmic Transparency Starts with Data Transparency” by Julia Stoyanovich and Bill Howe. Their focus is local and municipal governments and NGOs that deliver vital human services in health, housing, and mobility. In the article, they give a welcome emphasis to the role of data, in contrast to the common focus these days on algorithms alone. They write, “data is used to customize generic algorithms for specific situations—that is to say that algorithms are trained using data. The same algorithm may exhibit radically different behavior—make different predictions; make a different number of mistakes and even different kinds of mistakes—when trained on two different data sets. In other words, without access to the training data, it is impossible to know how an algorithm would actually behave.” See their article for more discussion on designing systems for data transparency.
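
Their point is easy to demonstrate in a few lines of code. The following is a minimal sketch (ours, not from the article) in which one and the same learning algorithm, trained on two different data sets, makes opposite predictions on the same input:

```python
# Hypothetical illustration: the same algorithm, two training sets,
# two different behaviors. Requires scikit-learn.
from sklearn.linear_model import LogisticRegression

# Two training sets with opposite associations between feature and label.
X_a, y_a = [[0], [1], [2], [3]], [0, 0, 1, 1]
X_b, y_b = [[0], [1], [2], [3]], [1, 1, 0, 0]

model_a = LogisticRegression().fit(X_a, y_a)
model_b = LogisticRegression().fit(X_b, y_b)

print(model_a.predict([[2.5]]))  # [1]
print(model_b.predict([[2.5]]))  # [0]: same algorithm, opposite decision
```

Auditing the algorithm alone could never reveal this difference; only inspecting the training data can.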

US and European Policy

Adam Eisgrau, ACM Director of Global Policy and Public Affairs, published an update on the ACM US and Europe Policy Committees in the November 29 ACM MemberNet.

Interview with Kristian Kersting

This column is the sixth in our series profiling senior AI researchers. This month we interview Kristian Kersting, Professor in Computer Science and Deputy Director of the Centre for Cognitive Science at the Technical University of Darmstadt, Germany.

Kristian Kersting’s Bio

After receiving his Ph.D. from the University of Freiburg in 2006, he was with MIT, Fraunhofer IAIS, the University of Bonn, and TU Dortmund University. His main research interests are statistical relational artificial intelligence (AI), probabilistic deep programming, and machine learning. Kristian has published over 170 peer-reviewed technical papers and co-authored a book on statistical relational AI. He received the European Association for Artificial Intelligence (EurAI, formerly ECCAI) Dissertation Award 2006 for the best AI dissertation in Europe and two best-paper awards (ECML 2006, AIIDE 2015). He has given several tutorials at top AI conferences, co-chaired several international workshops, and co-founded the international workshop series on Statistical Relational AI (StarAI). He regularly serves on the PC (often at senior level) of several top conferences and co-chaired the PC of ECML PKDD 2013 and UAI 2017. He is the Specialty Editor in Chief for Machine Learning and AI of Frontiers in Big Data and is/was an action editor of TPAMI, JAIR, AIJ, DAMI, and MLJ.

When and how did you become interested in AI?

As a student, I attended an AI course taught by Bernhard Nebel at the University of Freiburg. This was the first time I dived deep into AI. However, my interest in AI was probably triggered earlier. Around the age of 16, I think, I was reading about AI in some popular science magazines. I did not get all the details, but I was fascinated.

What professional achievement are you most proud of?

We were collaborating with biologists to better understand how plants react to (a)biotic stress, using machine learning to analyze hyperspectral images. We got quite encouraging results. The first submission to a journal, however, got rejected. As you can imagine, I was disappointed. One of the biologists from our team looked at me and said, “Kristian, do not worry, your research helped us a lot.” This made me proud. But I am also proud of the joint work with Martin Mladenov on compressing linear and quadratic programs using fractional automorphisms. This provides optimization flags for ML and AI compilers: turning them on makes the compilers attempt to reduce the solver costs, making ML and AI automatically faster.
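
To give a flavor of what compressing an optimization problem means, here is a toy sketch (our illustration, not the algorithm from the work by Mladenov and Kersting) of the underlying intuition: when variables of a linear program are interchangeable, an optimal solution exists in which they take equal values, so the program can be solved over one representative variable per orbit.

```python
# Hypothetical toy example of symmetry compression in linear programming.
# Requires scipy.
from scipy.optimize import linprog

# Original LP: minimize x1 + x2 + x3  s.t.  x1 + x2 + x3 >= 3, x >= 0.
full = linprog(c=[1, 1, 1], A_ub=[[-1, -1, -1]], b_ub=[-3], bounds=(0, None))

# All three variables are interchangeable (one orbit of size 3), so the
# compressed LP uses one representative y: minimize 3*y  s.t.  3*y >= 3, y >= 0.
lifted = linprog(c=[3], A_ub=[[-3]], b_ub=[-3], bounds=(0, None))

print(full.fun, lifted.fun)  # both optima are 3.0; the lifted LP is 3x smaller
```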

What would you have chosen as your career if you hadn’t gone into CS?

Physics, I guess, but back then I did not see any other option than Computer Science.

What do you wish you had known as a Ph.D. student or early researcher?

That “sleep is for post-docs,” as Michael Littman once said.

Artificial Intelligence = Machine Learning. What’s wrong with this equation?

Machine Learning (ML) and Artificial Intelligence (AI) are indeed similar, but not quite the same. AI is about problem solving, reasoning, and learning in general. To keep it simple, if you can write a very clever program that shows intelligent behavior, it can be AI. But unless the program is automatically learned from data, it is not ML. The easiest way to think of their relationship is to visualize them as concentric circles, with AI outermost and ML sitting inside (and deep learning fitting inside both), since ML also requires writing programs, namely, programs that implement the learning process. The crucial point is that they share the idea of using computation as the language for intelligent behavior.
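
A minimal sketch (ours, with a toy spam filter standing in for “intelligent behavior”) of the distinction Kersting draws:

```python
# AI without ML: a hand-written rule. Its behavior never changes with data.
def spam_rule(message):
    return "free money" in message.lower()

# ML: the same task, but the decision rule is fitted to labeled examples,
# so different training data yields a different program. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["free money now", "meeting at noon", "win free money", "lunch tomorrow"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

print(spam_rule("free money inside"))                               # True
print(model.predict(vectorizer.transform(["win money tomorrow"])))  # likely [1]
```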

As you experienced AI research and education in the US and in Europe, what are the biggest differences between the two systems and what can we learn from each other?

If you present a new idea, US people will usually respond with “Sounds great, let’s do it!”, while the typical German reply is “This won’t work because …”. Here, AI is no exception. It is received much more critically in Germany than in the US. However, this also provides research opportunities, such as transparent, fair, and explainable AI. Generally, over the past three decades, academia and industry have been converging philosophically and physically much more in the US than in Germany. This facilitates the transfer of AI knowledge via well-trained, constantly learning AI experts, who can then continuously create new ideas within the company/university and keep pace with AI development. To foster AI research and education, the department structure and tenure-track system common in the US are beneficial. On the other hand, Germany offers free higher education to all students, regardless of their origin. AI has no borders. We have to take it out of the ivory towers and make it accessible to all.

What is the most interesting project you are currently involved with?

Deep learning has made striking advances in enabling computers to perform tasks like recognizing faces or objects, but it does not show the general, flexible intelligence that lets people solve problems without being specially trained to do so. Thus, it is time to boost its IQ. Currently, we are working on deep learning approaches based on sum-product networks and other arithmetic circuits that explicitly quantify uncertainty. Together with colleagues, also from the Centre for Cognitive Science, we are combining the resulting probabilistic deep learning with probabilistic (logical) programming languages. If successful, this would be a big step forward in programming languages, machine learning, and AI.
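
For readers unfamiliar with sum-product networks: they are computation graphs whose leaves are indicator variables and whose internal nodes are weighted sums and products, so joint probabilities and exact marginals both come out of a single bottom-up pass. A minimal sketch (ours, not from Kersting’s group) over two binary variables:

```python
def indicator(value, state):
    # Leaf node: 1.0 if the variable takes this state, and also 1.0 if the
    # variable is marginalized out (None), which is how SPNs compute marginals.
    return 1.0 if value is None or value == state else 0.0

def spn(x1=None, x2=None):
    # Two product nodes (independent factors) combined by a root sum node.
    p1 = (0.9 * indicator(x1, True) + 0.1 * indicator(x1, False)) * \
         (0.8 * indicator(x2, True) + 0.2 * indicator(x2, False))
    p2 = (0.2 * indicator(x1, True) + 0.8 * indicator(x1, False)) * \
         (0.1 * indicator(x2, True) + 0.9 * indicator(x2, False))
    return 0.6 * p1 + 0.4 * p2  # mixture weights sum to 1

print(spn(True, True))  # joint P(X1=1, X2=1) = 0.6*0.72 + 0.4*0.02 = 0.44
print(spn(True, None))  # exact marginal P(X1=1) = 0.6*0.9 + 0.4*0.2 = 0.62
```

Both queries cost one pass over the circuit, which is why arithmetic circuits are attractive for probabilistic deep learning.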

AI is grown up – it’s time to make use of it for good. Which real-world problem would you like to see solved by AI in the future?

Due to climate change, population growth, and food security concerns, the world has to seek more innovative approaches to protecting and improving crop yield. AI should play a major role here. Next to feeding a hungry world, AI should aim to help eradicate disease and poverty.

We currently observe many promising and exciting advances in using AI in education, going beyond automated Piazza answering. How should we make use of AI to teach AI?

AI can be seen as an expanding and evolving network of ideas, scholars, papers, code, and showcases. Can machines read this data? We should establish the “AI Genome,” a dataset, a knowledge base, an ongoing effort to learn and reason about AI problems, concepts, algorithms, and experiments. This would help not only to curate and personalize the learning experience but also to meet the challenges of reproducible AI research. It would make AI truly accessible for all.

What is your favorite AI-related movie or book and why?

“Ex Machina,” because the Turing test is shaping its plot. It makes me think about current real-life systems that give the impression that they pass the test. However, I think AI is harder than many people think.

Pew Report on Attitudes Toward Algorithms

Pew Research Center just released a report, Public Attitudes Toward Computer Algorithms, by Aaron Smith, on Americans’ concerns about the fairness and effectiveness of computer algorithms in making important decisions. The report says “This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual … the survey presented respondents with four different scenarios in which computers make decisions by collecting and analyzing large quantities of public and private data. Each of these scenarios were based on real-world examples of algorithmic decision-making … and included: a personal finance score used to offer consumers deals or discounts; a criminal risk assessment of people up for parole; an automated resume screening program for job applicants; and a computer-based analysis of job interviews. The survey also included questions about the content that users are exposed to on social media platforms as a way to gauge opinions of more consumer-facing algorithms.”
The report is available at http://www.pewinternet.org/2018/11/16/public-attitudes-toward-computer-algorithms/.

Joint AAAI/ACM SIGAI Doctoral Dissertation Award

The Special Interest Group on Artificial Intelligence of the Association for Computing Machinery (ACM SIGAI) and the Association for the Advancement of Artificial Intelligence (AAAI) are happy to announce that they have established the Joint AAAI/ACM SIGAI Doctoral Dissertation Award to recognize and encourage superior research and writing by doctoral candidates in artificial intelligence. This annual award is presented at the AAAI Conference on Artificial Intelligence in the form of a certificate and is accompanied by the option to present the dissertation at the AAAI conference as well as to submit one 6-page summary for both the AAAI proceedings and the newsletter of ACM SIGAI. Up to two Honorable Mentions may also be awarded, also with the option to present their dissertations at the AAAI conference as well as submit one 6-page summary for both the AAAI proceedings and the newsletter of ACM SIGAI. The award will be presented for the first time at the AAAI conference in 2020 for dissertations that have been successfully defended (but not necessarily finalized) between October 1, 2018 and September 30, 2019. Nominations are welcome from any country, but only English language versions will be accepted. Only one nomination may be submitted per Ph.D.-granting institution, including large universities. Dissertations will be reviewed for relevance to artificial intelligence, technical depth and significance of the research contribution, potential impact on theory and practice, and quality of presentation. The details of the nomination process will be announced in early 2019.

Legal AI

AI is impacting law and policy issues as both a tool and a subject area. Advances in AI provide tools for carrying out legal work in business and government, and the use of AI in all parts of society is creating new demands and challenges for the legal profession.

Lawyers and AI Tools

In a recent study, “20 top US corporate lawyers with decades of experience in corporate law and contract review were pitted against an AI. Their task was to spot issues in five Non-Disclosure Agreements (NDAs), which are a contractual basis for most business deals.” The LawGeex AI system attempted the correct identification of basic legal principles in contracts. The results suggest that AI systems can achieve higher accuracy in shorter time than lawyers. As with other areas of AI application, issues include trust in automation to make skilled legal decisions, safety in using AI systems, and impacts on the workforce of the future. For legal work, AI systems can potentially reduce the time needed for high-volume, low-risk contracts and give lawyers more time for less mundane tasks. Policies should focus on automation where possible and safe; AI for legal work is another example of the need for collaborative roles for humans and AI systems.

AI Impact on Litigation

The other side, with AI as subject rather than tool, is the litigation emerging in all parts of society from the use of AI. Understanding the nature of adaptive AI systems can be crucial for fact-finders and difficult to explain to non-experts. Smart policymaking needs to make clear the liability issues and ethics in cases involving the use of AI technology. “Artificial Intelligence and the Role of Expert Witnesses in AI Litigation” by Dani Alexis Ryskamp, writing for The Expert Institute, discusses artificial intelligence in civil claims and the role of expert witnesses in elucidating the complexities of the technology in the context of litigation. “Over the past few decades, everything from motor vehicles to household appliances has become more complex and, in many cases, artificial intelligence only adds to that complexity. For end-users of AI products, determining what went wrong and whose negligence was responsible can be bafflingly complex. Experts retained in AI cases typically come from fields like computer or mechanical engineering, information systems, data analysis, robotics, and programming. They may specialize in questions surrounding hardware, software, 3D-printing, biomechanics, Bayesian logic, e-commerce, or other disciplines. The European Commission recently considered the question of whether to give legal status to certain robots. One of the issues weighed in the decision involved legal liability: if an AI-based robot or system, acting autonomously, injures a person, who is liable?”

FTC Hearing on AI and Algorithms

FTC Hearing on AI and Algorithms: November 13 and 14 in Washington, DC

From the FTC: The hearing will examine competition and consumer protection issues associated with the use of algorithms, artificial intelligence, and predictive analytics in business decisions and conduct (see the detailed agenda). The record of that proceeding will be open until mid-February. To further its consideration of these issues, the agency seeks public comment on the questions below, and it welcomes input on other related topics not specifically listed in the 25 questions.

Please send your thoughts on what SIGAI might submit in response to the 25 specific questions posed by the Commission (see below) to lrm@gwu.edu. The hearing will inform the FTC, other policymakers, and the public of
* the current and potential uses of these technologies;
* the ethical and consumer protection issues that are associated with the use of these technologies;
* how the competitive dynamics of firm and industry conduct are affected by the use of these technologies; and
* policy, innovation, and market considerations associated with the use of these technologies.

25 specific questions posed by the FTC

Background on Algorithms, Artificial Intelligence, and Predictive Analytics, and Applications of the Technologies

  1. What features distinguish products or services that use algorithms, artificial intelligence, or predictive analytics? In which industries or business sectors are they most prevalent?
  2. What factors have facilitated the development or advancement of these technologies? What types of resources were involved (e.g., human capital, financial, other)?
  3. Are there factors that have impeded the development of these technologies? Are there factors that could impede further development of these technologies?
  4. What are the advantages and disadvantages for consumers and for businesses of utilizing products or services facilitated by algorithms, artificial intelligence, or predictive analytics?
  5. From a technical perspective, is it sometimes impossible to ascertain the basis for a result produced by these technologies? If so, what concerns does this raise?
  6. What are the advantages and disadvantages of developing technologies for which the basis for the results can or cannot be determined? What criteria should determine when a “black box” system is acceptable, or when a result should be explainable?

Common Principles and Ethics in the Development and Use of Algorithms, Artificial Intelligence, and Predictive Analytics

  7. What are the main ethical issues (e.g., susceptibility to bias) associated with these technologies? How are the relevant affected parties (e.g., technologists, the business community, government, consumer groups, etc.) proposing to address these ethical issues? What challenges might arise in addressing them?
  8. Are there ethical concerns raised by these technologies that are not also raised by traditional computer programming techniques or by human decision-making? Are the concerns raised by these technologies greater or less than those of traditional computer programming or human decision-making? Why or why not?
  9. Are industry self-regulation and government enforcement of existing laws sufficient to address concerns, or are new laws or regulations necessary?
  10. Should ethical guidelines and common principles be tailored to the type of technology involved, or should the goal be to develop one overarching set of best practices?

Consumer Protection Issues Related to Algorithms, Artificial Intelligence, and Predictive Analytics

  11. What are the main consumer protection issues raised by algorithms, artificial intelligence, and predictive analytics?
  12. How well do the FTC’s current enforcement tools, including the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, address issues raised by these technologies?
  13. In recent years, the FTC has held public forums to examine the consumer protection questions raised by artificial intelligence as used in certain contexts (e.g., the 2017 FinTech Forum on artificial intelligence and blockchain and the 2011 Face Facts Forum on facial recognition technology). Since those events, have technological advancements, or the increased prevalence of certain technologies, raised new or increased consumer protection concerns?
  14. What roles should explainability, risk management, and human control play in the implementation of these technologies?
  15. What choices and notice should consumers have regarding the use of these technologies?
  16. What educational role should the FTC play with respect to these technologies? What would be most useful to consumers?

Competition Issues Related to Algorithms, Artificial Intelligence, and Predictive Analytics

  17. Does the use of algorithms, artificial intelligence, and predictive analytics currently raise particular antitrust concerns (including, but not limited to, concerns about algorithmic collusion)?
  18. What antitrust concerns could arise in the future with respect to these technologies?
  19. Is the current antitrust framework for analyzing mergers and conduct sufficient to address any competition issues that are associated with the use of these technologies? If not, why not, and how should the current legal framework be modified?
  20. To what degree do any antitrust concerns raised by these technologies depend on the industry or type of use?

Other Policy Questions

  21. How are these technologies affecting competition, innovation, and consumer choices in the industries and business sectors in which they are used today? How might they do so in the future?
  22. How quickly are these technologies advancing? What are the implications of that pace of technological development from a policy perspective?
  23. How can regulators meet legitimate regulatory goals that may be raised in connection with these technologies without unduly hindering competition or innovation?
  24. Are there tensions between consumer protection and competition policy with respect to these technologies? If so, what are they, and how should they be addressed?
  25. What responsibility does a company utilizing these technologies bear for consumer injury arising from its use of these technologies? Can current laws and regulations address such injuries? Why or why not?

Comments can be submitted online no later than February 15, 2019. If any entity has provided funding for research, analysis, or commentary that is included in a submitted public comment, such funding and its source should be identified on the first page of the comment.

Policy in the News

The Computing Community Consortium (CCC) announced a new initiative to create a Roadmap for Artificial Intelligence. SIGAI’s Yolanda Gil (University of Southern California and President-Elect of AAAI) will work with Bart Selman (Cornell University) to lead the effort. The initiative will support the U.S. Administration’s efforts in this area and involve academic and industrial researchers to help map a course for needed research in AI. They will hold a series of workshops in 2018 and 2019 to produce the Roadmap by Spring 2019. The Computing Research Association (CRA) has been involved in shaping public policy of relevance to computing research for more than two decades (https://cra.org/govaffairs/blog/). The CRA Government Affairs program has enhanced its efforts to help the members of the computing research community contribute to the public debate knowledgeably and effectively.

Ed Felten, Princeton Professor of Computer Science and Public Affairs, has been confirmed by the U.S. Senate to be a member of the U.S. Privacy and Civil Liberties Oversight Board, a bipartisan agency within the executive branch. He will serve as a part-time member of the board while continuing his teaching and research at Princeton. The five-person board is charged with evaluating and advising on executive branch anti-terrorism measures with respect to privacy and civil liberties. “It is a very important issue,” Felten said. “Federal agencies, in the course of doing national security work, have access to a lot of data about people and they do intercept data. It’s important to make sure they are doing those things in the way they should and not overstepping.” Felten added that the board has the authority to review programs that require secrecy. “The public has limited visibility into some of these programs,” Felten said. “The board’s job is to look out for the public interest.”

On October 24, 2018, the National Academies of Sciences, Engineering, and Medicine Forum on Aging, Disability, and Independence will host a workshop in Washington, DC that will explore the potential of artificial intelligence (AI) to foster a balance of safety and autonomy for older adults and people with disabilities who strive to live as independently as possible (http://nationalacademies.org/hmd/Activities/Aging/AgingDisabilityForum/2018-OCT-24.aspx).

According to Reuters, Amazon scrapped an AI recruiting tool that showed bias against women in automated employment screening.

ML Safety by Design

In a recent post, we discussed the need for policymakers to recognize that AI and Autonomous Systems (AI/AS) always require varying degrees of human involvement (“hybrid” human/machine systems). Understanding the potential and limitations of combining technologies and humans is important for realistic policymaking. A key element, along with accurate forecasts of changes in technology, is the safety of AI/AS-human products, as discussed in the IEEE report “Ethically Aligned Design” (subtitled “A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems”), in Ben Shneiderman’s excellent summary and comments on the report, and in the YouTube video of his Turing Institute Lecture on “Algorithmic Accountability: Design for Safety”.

In Shneiderman’s proposal for a National Algorithms Safety Board, he writes “What might help are traditional forms of independent oversight that use knowledgeable people who have powerful tools to anticipate, monitor, and retrospectively review operations of vital national services. The three forms of independent oversight that have been used in the past by industry and governments—planning oversight, continuous monitoring by knowledgeable review boards using advanced software, and a retrospective analysis of disasters—provide guidance for responsible technology leaders and concerned policy makers. Considering all three forms of oversight could lead to policies that prevent inadequate designs, biased outcomes, or criminal actions.”

Efforts to provide “safety by design” include work at Google on Human-Centered Machine Learning and a general “human-centered approach that foregrounds responsible AI practices and products that work well for all people and contexts. These values of responsible and inclusive AI are at the core of the AutoML suite of machine learning products …”
Further work is needed to systematize and enforce good practices in human-centered AI design and development, including algorithmic transparency and guidance for selecting unbiased data for machine learning systems.

2018 ACM SIGAI Student Essay Contest on Artificial Intelligence Technologies

After the success of our 2017 version of the contest we are happy to announce another round of the ACM SIGAI Student Essay Contest on Artificial Intelligence Technologies!

Download a PDF of the call here: https://tinyurl.com/SIGAIEssay2018

Win one of several $500 monetary prizes or a Skype conversation with a leading AI researcher: Joanna Bryson, Murray Campbell, Eric Horvitz, Peter Norvig, Iyad Rahwan, Francesca Rossi, or Toby Walsh.

We have extended the deadline to February 15th, 2019, Anywhere on Earth Time Zone. Please get your submissions in!

Students interested in these topics should consider submitting to the 2019 Artificial Intelligence, Ethics, and Society Conference and/or Student Program; the deadline is in early November. See the website for all the details.

2018 Topic

The ACM Special Interest Group on Artificial Intelligence (ACM SIGAI) supports the development and responsible application of Artificial Intelligence (AI) technologies. From intelligent assistants to self-driving cars, an increasing number of AI technologies now (or soon will) affect our lives. Examples include Google Duplex (Link) talking to humans, Drive.ai (Link) offering rides in US cities, chatbots advertising movies by impersonating people (Link), and AI systems making decisions about parole (Link) and foster care (Link). We interact with AI systems, whether we know it or not, every day.

Such interactions raise important questions. ACM SIGAI is in a unique position to shape the conversation around these and related issues and is thus interested in obtaining input from students worldwide to help shape the debate. We therefore invite all students to enter an essay in the 2018 ACM SIGAI Student Essay Contest, to be published in the ACM SIGAI newsletter “AI Matters,” addressing one or both of the following topic areas (or any other question in this space that you feel is important) while providing supporting evidence:

  • What requirements, if any, should be imposed on AI systems and technology when interacting with humans who may or may not know that they are interacting with a machine?  For example, should they be required to disclose their identities? If so, how? See, for example, “Turing’s Red Flag” in CACM (Link).
  • What requirements, if any, should be imposed on AI systems and technology when making decisions that directly affect humans? For example, should they be required to make transparent decisions? If so, how?  See, for example, the IEEE’s summary discussion of Ethically Aligned Design (Link).

Each of the above topic areas raises further questions, including

  • Who is responsible for the training and maintenance of AI systems? See, for example, Google’s (Link), Microsoft’s (Link), and IBM’s (Link) AI Principles.
  • How do we educate ourselves and others about these issues and possible solutions? See, for example, new ways of teaching AI ethics (Link).
  • How do we handle the fact that different cultures see these problems differently?  See, for example, Joi Ito’s discussion in Wired (Link).
  • Which steps can governments, industries, or organizations (including ACM SIGAI) take to address these issues?  See, for example, the goals and outlines of the Partnership on AI (Link).

All sources must be cited. However, we are not interested in summaries of the opinions of others. Rather, we are interested in the informed opinions of the authors. Writing an essay on this topic requires some background knowledge. Possible starting points for acquiring such background knowledge are:

  • the revised ACM Code of Ethics (Link), especially Section 3.7, and a discussion of why the revision was necessary (Link),
  • IEEE’s Ethically Aligned Design (Link), and
  • the One Hundred Year Study on AI and Life in 2030 (Link).

Format and Eligibility

The ACM SIGAI Student Essay Contest is open to all ACM SIGAI student members at the time of submission. (If you are a student but not an ACM SIGAI member, you can join ACM SIGAI before submission for just US$ 11 at https://goo.gl/6kifV9 by selecting Option 1, even if you are not an ACM member.) Essays can be authored by one or more ACM SIGAI student members, but each ACM SIGAI student member can (co-)author only one essay.

All authors must be ACM SIGAI members at the time of submission; submissions not meeting this requirement will not be reviewed.

Essays should be submitted as PDF documents of any style with at most 5,000 words via https://easychair.org/conferences/?conf=acmsigai2018.

The deadline for submissions has been extended from January 10th, 2019 to February 15th, 2019, Anywhere on Earth Time Zone. Please get your submissions in!

The authors certify with their submissions that they have followed the ACM publication policies on “Author Representations,” “Plagiarism,” and “Criteria for Authorship” (http://www.acm.org/publications/policies/). They also certify with their submissions that they will transfer the copyright of winning essays to ACM.

Judges and Judging Criteria

Winning entries from last year’s essay contest can be found in recent issues of the ACM SIGAI newsletter “AI Matters,” specifically Volume 3, Issue 3: http://sigai.acm.org/aimatters/3-3.html and Volume 3, Issue 4: http://sigai.acm.org/aimatters/3-4.html.

Entries will be judged by the following panel of leading AI researchers and ACM SIGAI officers. Winning essays will be selected based on depth of insight, creativity, technical merit, and novelty of argument. All decisions by the judges are final.

    • Rediet Abebe, Cornell University
    • Emanuelle Burton, University of Illinois at Chicago
    • Sanmay Das, Washington University in St. Louis  
    • John P. Dickerson, University of Maryland
    • Virginia Dignum, Delft University of Technology
    • Tina Eliassi-Rad, Northeastern University
    • Judy Goldsmith, University of Kentucky
    • Amy Greenwald, Brown University
    • H. V. Jagadish, University of Michigan
    • Sven Koenig, University of Southern California  
    • Benjamin Kuipers, University of Michigan  
    • Nicholas Mattei, IBM Research
    • Alexandra Olteanu, Microsoft Research
    • Rosemary Paradis, Leidos
    • Kush Varshney, IBM Research
    • Roman Yampolskiy, University of Louisville
    • Yair Zick, National University of Singapore

Prizes

All winning essays will be published in the ACM SIGAI newsletter “AI Matters.” ACM SIGAI provides five monetary awards of USD 500 each as well as 45-minute Skype sessions with the following AI researchers:

    • Joanna Bryson, Reader (Assoc. Prof) in AI, University of Bath
    • Murray Campbell, Senior Manager, IBM Research AI
    • Eric Horvitz, Managing Director, Microsoft Research
    • Peter Norvig, Director of Research, Google
    • Iyad Rahwan, Associate Professor, MIT Media Lab and Head of Scalable Corp.
    • Francesca Rossi, AI and Ethics Global Lead, IBM Research AI
    • Toby Walsh, Scientia Professor of Artificial Intelligence, UNSW Sydney, Data61 and TU Berlin

One award is given per winning essay. Authors or teams of authors of winning essays will pick (in a pre-selected order) an available Skype session or one of the monetary awards until all Skype sessions and monetary awards have been claimed. ACM SIGAI reserves the right to substitute a Skype session with a different AI researcher or a monetary award for a Skype session in case an AI researcher becomes unexpectedly unavailable. Some prizes might not be awarded in case the number of high-quality submissions is smaller than the number of prizes.

Questions?

In case of questions, please first check the ACM SIGAI blog for announcements and clarifications: https://sigai.acm.org/aimatters/blog/. You can also contact the ACM SIGAI Student Essay Contest Organizers at sigai@member.acm.org.

  • Nicholas Mattei (IBM Research) – ACM SIGAI Student Essay Contest Organizer and AI and Society Officer

with involvement from

    • Sven Koenig (University of Southern California), ACM SIGAI Chair
    • Sanmay Das (Washington University in St. Louis), ACM SIGAI Vice Chair
    • Rosemary Paradis (Leidos), ACM SIGAI Secretary/Treasurer
    • Benjamin Kuipers (University of Michigan), ACM SIGAI Ethics Officer
    • Amy McGovern (University of Oklahoma), ACM SIGAI AI Matters Editor-in-Chief