OpenAI

A recent controversy erupted over OpenAI’s new version of its language model (GPT-2), which generates well-written continuations of text based on unsupervised training on large samples of writing. The announcement and the decision not to follow open-source practices raise interesting policy issues about regulation and self-regulation of AI products. OpenAI, a non-profit AI research company founded by Elon Musk and others, announced on February 14, 2019, that “We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.”

The reactions to the announcement centered on the decision expressed in this statement from the release: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
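For readers who want to experiment, here is a minimal sketch of sampling from the publicly released smaller model. It assumes the third-party Hugging Face “transformers” package (pip install transformers torch), which redistributes the released weights under the model name “gpt2”; this tooling is not part of OpenAI’s announcement.

    # Minimal sketch: sampling from the publicly released small GPT-2 model.
    # Assumes the third-party Hugging Face "transformers" package.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the smaller released model
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "In a shocking finding, scientists discovered"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Truncated (top-k) sampling; k=40 was a common choice in early GPT-2 demos.
    output = model.generate(input_ids, max_length=60, do_sample=True, top_k=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))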

Examples of the many reactions appeared in TechCrunch and Wired. The Electronic Frontier Foundation analyzed the manner of the release (letting journalists know first) and concludes, “when an otherwise respected research entity like OpenAI makes a unilateral decision to go against the trend of full release, it endangers the open publication norms that currently prevail in language understanding research.”

This issue exemplifies questions raised previously in our Public Policy blog about who, if anyone, should regulate AI developments and products that have potential negative impacts on society. Do we rely on self-regulation or require governmental regulation? What if the U.S. has regulations and other countries do not? Would a clearinghouse approach put profit-based pressure on developers and corporations? Can the open-source movement be successful without regulatory assistance?

Interview with Thomas Dietterich

Introduction

Welcome to the eighth interview in our series profiling senior AI researchers. This month we are especially happy to interview our SIGAI advisory board member, Thomas Dietterich, Director of Intelligent Systems at the Institute for Collaborative Robotics and Intelligent Systems (CoRIS) at Oregon State University.

Tom Dietterich

Biography

Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University, where he joined the faculty in 1985. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 200 refereed publications and two books. His research is motivated by challenging real-world problems, with a special focus on ecological science, ecosystem management, and sustainable development. He is best known for his work on ensemble methods in machine learning, including the development of error-correcting output coding. Dietterich has also invented important reinforcement learning algorithms, including the MAXQ method for hierarchical reinforcement learning. Dietterich has devoted many years of service to the research community. He served as President of the Association for the Advancement of Artificial Intelligence (2014-2016) and as the founding president of the International Machine Learning Society (2001-2008). Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal of Machine Learning Research, and Program Chair of AAAI 1990 and NIPS 2000. Dietterich is a Fellow of the ACM, AAAI, and AAAS.

Getting to Know Tom Dietterich

When and how did you become interested in CS and AI?

I learned to program in Basic in my early teens; I had an uncle who worked for GE on their time-sharing system. I learned Fortran in high school. I tried to build my own adding machine out of TTL chips around that time too. However, despite this interest, I didn’t really know what CS was until I reached graduate school at the University of Illinois. I first engaged with AI when I took a graduate assistant position with Ryszard Michalski on what became machine learning, and I took an AI class from Dave Waltz. I had also studied philosophy of science in college, so I had already thought a bit about how we acquire knowledge from data and experiment.

What would you have chosen as your career if you hadn’t gone into CS?

I had considered going into foreign service, and I have always been interested in policy issues. I might also have gone into technical management. Both of my brothers have been successful technical managers.

What do you wish you had known as a Ph.D. student or early researcher?

I wish I had understood the importance of strong math skills for CS research. I was a software engineer before I was a computer science researcher, and it took me a while to understand the difference. I still struggle with the difference between making an incremental advance within an existing paradigm versus asking fundamental questions that lead to new research paradigms.

What professional achievement are you most proud of?

Developing the MAXQ formalism for hierarchical reinforcement learning.

What is the most interesting project you are currently involved with?

I’m fascinated by the question of how machine learning predictors can have models of their own competence. This is important for making safe and robust AI systems. Today, we have ML methods that give accurate predictions in aggregate, but we struggle to provide point-wise quantification of uncertainty. Related to these questions are algorithms for anomaly detection and open category detection. In general, we need AI systems that can work well even in the presence of “unknown unknowns”.
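As a toy illustration of the ensemble angle on point-wise uncertainty (our sketch, not code from Dietterich’s group), the spread of predictions across a bagged ensemble can serve as a rough per-input competence signal:

    # Illustrative sketch: per-input disagreement across a bagged ensemble
    # as a rough point-wise uncertainty (competence) signal.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                                 random_state=0).fit(X, y)

    # Stack each member's P(class = 1) for a few inputs;
    # a high standard deviation signals low self-assessed competence there.
    member_probs = np.stack([m.predict_proba(X[:5])[:, 1]
                             for m in ensemble.estimators_])
    print("mean prediction:   ", member_probs.mean(axis=0))
    print("disagreement (std):", member_probs.std(axis=0))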

Recent advances in AI have led to many success stories of AI technology tackling real-world problems. What are the challenges of deploying AI systems?

AI systems are software systems, so the main challenges are the same as with any software system. First, are we building the right system? Do we correctly understand the users’ needs? Have we correctly expressed user preferences in our reward functions, constraints, and loss functions? Have we done so in a way that respects ethical standards? Second, have we built the system we intended to build? How can we test software components created using machine learning? If the system is adapting online, how can we achieve continuous testing and quality assurance? Third, when ML is employed, the resulting software components (classifiers and similar predictive models) will fail if the input data distribution changes. So we must monitor the data distribution and model the process by which the data are being generated. This is sometimes known as the problem of “model management”. Fourth, how is the deployed system affecting the surrounding social and technical system? Are there unintended side-effects? Is user or institutional behavior changing as a result of the deployment?
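To make the “model management” point concrete, here is a minimal monitoring sketch (our illustration, not from the interview): compare a live window of one input feature against its training distribution with a two-sample Kolmogorov-Smirnov test and raise an alert when the distributions diverge.

    # Illustrative sketch: flagging input-distribution drift for one feature
    # with a two-sample Kolmogorov-Smirnov test.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # reference window
    live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)   # drifted production data

    stat, p_value = ks_2samp(train_feature, live_feature)
    if p_value < 0.01:
        print(f"ALERT: input drift detected (KS={stat:.3f}, p={p_value:.2g})")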

One promising approach is combining humans and AI into a collaborative team. How can we design such a system to successfully tackle challenging high-risk applications? Who should be in charge, the human or the AI?

I have addressed this in a recent short paper (Robust Artificial Intelligence and Robust Human Organizations. Frontiers of Computer Science, 13(1): 1-3). To work well in high-risk applications, human teams must function as so-called “High reliability organizations” or HROs. When we add AI technology to such teams, we must ensure that it contributes to their high reliability rather than disrupting and degrading it. According to organizational researchers, HROs share five main practices: (a) continuous attention to anomalous and near-miss events, (b) seeking diverse explanations for such events, (c) maintaining continuous situational awareness, (d) practicing improvisational problem solving, and (e) delegating decision-making authority to the team member who has the most expertise about the specific decision, regardless of rank. AI systems in HROs must implement these five practices as well. They must be constantly watching for anomalies and near misses. They must seek multiple explanations for such events (e.g., via ensemble methods). They must maintain situational awareness. They must support joint human-machine improvisational problem solving, such as mixed-initiative planning. And they must build models of the expertise of each team member (including themselves) to know which team member should make the final decision in any situation.

You ask “Who is in charge?” I’m not sure that is the right question. Our goal is to create human-machine teams that are highly reliable as a team. In an important sense, this means every member of the team has responsibility for robust team performance. However, from an ethical standpoint, I think the human team leader should have ultimate responsibility. The task of taking action in a specific situation could be delegated to the AI system, but the team leader has the moral responsibility for that action.

Moving towards transforming AI systems into highly reliable organizations, how can diversity help to achieve this goal?

Diversity is important for generating multiple hypotheses to explain anomalies and near misses. Experience in hospital operating rooms shows that it is often the nurses who first detect a problem or have the right solution. The same has been noted in nuclear power plant operations. Conversely, teams often fail when they engage in “group think” and fixate on an incorrect explanation for a problem.

How do you balance being involved in so many different aspects of the AI community?

I try to stay very organized and manage my time carefully. I use a machine learning system called TAPE (Tagging Assistant for Productive Email), developed by my collaborator and student Michael Slater, to automatically tag and organize my email. I also take copious notes in OneNote. Oh, and I work long hours…

What was your most difficult professional decision and why?

The most difficult decision is to tell a PhD student that they are not going to succeed in completing their degree. All teachers and mentors are optimistic people. When we meet a new student, we hope they will be very successful. But when it is clear that a student isn’t going to succeed, that is a deep disappointment for the student (of course) but also for the professor.

What is your favorite AI-related movie or book and why?

I really don’t know much of the science fiction literature (in books or films). My favorite is 2001: A Space Odyssey because I think it depicts most accurately how AI could lead to bad outcomes. Unlike in many other stories, HAL doesn’t “go rogue”. Instead, HAL creatively achieves the objective programmed by its creators; unfortunately, as a side effect, it kills the crew.

Call for Nominations

Editor-In-Chief ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)

The term of the current Editor-in-Chief (EiC) of the ACM Trans. on Asian and Low-Resource Language Information Processing (TALLIP) is coming to an end, and the ACM Publications Board has set up a nominating committee to assist the Board in selecting the next EiC.  TALLIP was established in 2002 and has been experiencing steady growth, with 178 submissions received in 2017.

Nominations, including self nominations, are invited for a three-year term as TALLIP EiC, beginning on June 1, 2019.  The EiC appointment may be renewed at most one time. This is an entirely voluntary position, but ACM will provide appropriate administrative support.

Appointed by the ACM Publications Board, Editors-in-Chief (EiCs) of ACM journals are delegated full responsibility for the editorial management of the journal consistent with the journal’s charter and general ACM policies. The Board relies on EiCs to ensure that the content of the journal is of high quality and that the editorial review process is both timely and fair. The EiC has final say on acceptance of papers, size of the Editorial Board, and appointment of Associate Editors. A complete list of responsibilities is found in the ACM Volunteer Editors Position Descriptions.

Nominations should include a vita along with a brief statement of why the nominee should be considered. Self-nominations are encouraged, and should include a statement of the candidate’s vision for the future development of TALLIP. The deadline for submitting nominations is April 15, 2019, although nominations will continue to be accepted until the position is filled.

Please send all nominations to the nominating committee chair, Monojit Choudhury (monojitc@microsoft.com).

The search committee members are:

  • Monojit Choudhury (Microsoft Research, India), Chair
  • Kareem M. Darwish (Qatar Computing Research Institute, HBKU)
  • Tei-wei Kuo (National Taiwan University & Academia Sinica), EiC of ACM Transactions on Cyber-Physical Systems; Vice Chair, ACM SIGAPP
  • Helen Meng (Chinese University of Hong Kong)
  • Taro Watanabe (Google Inc., Tokyo)
  • Holly Rushmeier (Yale University), ACM Publications Board Liaison

AI Hype Not

A recent item in Science|Business, “Artificial intelligence nowhere near the real thing, says German AI chief” by Éanna Kelly, gives policy-worthy warnings and ideas. “In his 20 years as head of Germany’s biggest AI research lab, Wolfgang Wahlster has seen the tech hype machine splutter three times. As he hands over to a new CEO, he warns colleagues: ‘Don’t over-promise’.” The computer scientist, who has just ended a 20-year stint as CEO of the German Research Centre for Artificial Intelligence, says that over-promising greatly underestimates the distance between AI and its human counterpart: “We’re years away from a game changer in the field. I always warn people, one should be a bit careful with what they claim. Every day you work on AI, you see the big gap between human intelligence and AI,” Wahlster told Science|Business.

For AI policy, we should remember to look out for over-promising, but we also need to be mindful of the time frame for making effective policy and be fully engaged now. Our efforts importantly inform policymakers about the real opportunities to make AI successful. A recent article in The Conversation by Ben Shneiderman, “What alchemy and astrology can teach artificial intelligence researchers,” gives insightful information and advice on how to avoid being distracted away “… from where the real progress is already happening: in systems that enhance – rather than replace – human capabilities.” Shneiderman recommends that technology designers shift “from trying to replace or simulate human behavior in machines to building wildly successful applications that people love to use.”

AAII and EAAI

President Trump issued an Executive Order on February 11, 2019, entitled “Maintaining American Leadership In Artificial Intelligence”. The full text is at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/. The American AI Initiative of course needs analysis and implementation details. Two sections of the Executive Order give hope for opportunities to provide public input:

Sec (5)(a)(1)(i): Within 90 days of the date of this order, the OMB Director shall publish a notice in the Federal Register inviting the public to identify additional requests for access or quality improvements for Federal data and models that would improve AI R&D and testing. …[T]hese actions by OMB will help to identify datasets that will facilitate non-Federal AI R&D and testing.
and
Sec (6)(b): To help ensure public trust in the development and implementation of AI applications, OMB shall issue a draft version of the memorandum for public comment before it is finalized.
Please stay tuned for ways that our ACM US Technology Policy Committee (USTPC) can help us provide our feedback on the implementation of the Executive Order.

A summary and analysis report is available from the Center for Data Innovation: Executive Order Will Help Ensure U.S. Leadership in AI. They comment that the administration “needs to do more than reprogram existing funds for AI research, skill development, and infrastructure development” and “should ask Congress for significant funding increases to (a) expand these research efforts; (b) implement light-touch regulation for AI; (c) resist calls to implement roadblocks or speed bumps for this technology, including export restrictions; (d) rapidly expand adoption of AI within government; and (e) implement comprehensive reforms to the nation’s workforce training and adjustment policies.”

The latter point was a topic in my invited talk at EAAI-19. Opportunities and innovation in education and training for the workforce of the future rely crucially on public policymaking about workers in the era of increasing use of AI and other automation technologies. An important issue is who will provide training that is timely (by 2030), practical, and affordable for workers who are impacted by job disruptions and transitioning to the new predicted post-automation jobs. The stakeholders, along with workers, are the schools, employers, unions, community groups, and others. Even if more jobs are created than lost, the work of the AI future will not be evenly available across the range of people in the current and near-future workforce.

Section 1 of the Executive Order “Maintaining American Leadership In Artificial Intelligence” follows:
Section 1.  Policy and Principles.  Artificial Intelligence (AI) promises to drive growth of the United States economy, enhance our economic and national security, and improve our quality of life. The United States is the world leader in AI research and development (R&D) and deployment.  Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.  The Federal Government plays an important role in facilitating AI R&D, promoting the trust of the American people in the development and deployment of AI-related technologies, training a workforce capable of using AI in their occupations, and protecting the American AI technology base from attempted acquisition by strategic competitors and adversarial nations.  Maintaining American leadership in AI requires a concerted effort to promote advancements in technology and innovation, while protecting American technology, economic and national security, civil liberties, privacy, and American values and enhancing international and industry collaboration with foreign partners and allies.  It is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy, the American AI Initiative (Initiative), guided by five principles:
(a)  The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.
(b)  The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today’s industries.
(c)  The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today’s economy and jobs of the future.
(d)  The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.
(e)  The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.

Interview with Iolanda Leite

Introduction

This column is the seventh in our series profiling senior AI researchers. This month we are happy to interview Iolanda Leite, Assistant Professor at the School of Computer Science and Electrical Engineering at the KTH Royal Institute of Technology in Sweden. This is a great opportunity to get to know Iolanda, the new AI Matters co-editor-in-chief. Welcome on board!

Biography

Iolanda Leite is an Assistant Professor at the School of Computer Science and Electrical Engineering at the KTH Royal Institute of Technology in Sweden. She holds a PhD in Information Systems and Computer Engineering from IST, University of Lisbon. Prior to joining KTH, she was a Research Assistant at the Intelligent Agents and Synthetic Characters Group at INESC-ID Lisbon, a Postdoctoral Associate at the Yale Social Robotics Lab, and an Associate Research Scientist at Disney Research Pittsburgh. Iolanda’s research interests are in the areas of Human-Robot Interaction and Artificial Intelligence. She aims to develop autonomous socially intelligent robots that can assist people over long periods of time.

Getting to Know Iolanda Leite

When and how did you become interested in CS and AI?

I became interested in CS at the age of 4 when the first computer arrived at our home. It is more difficult to establish a time to define my interest in AI. I was born in the 80s and have always been fascinated by toys that had some level of “intelligence” or “life-likeness” like the Tamagotchi or the Furby robots. During my Master’s degree, I chose the Intelligent Systems specialization. That time was probably when I seriously considered a research career in this area.

What professional achievement are you most proud of?

Seeing my students accomplish great things on their own.

What would you have chosen as your career if you hadn’t gone into CS?

I always loved to work with children so maybe something related to child education.

What do you wish you had known as a Ph.D. student or early researcher?

As an early researcher I often had a hard time dealing with the rejection of papers, applications, etc. What I wish the “past me” could know is that if one keeps working hard, things will eventually work out well in the end. In other words, keeping faith in the system.

What is the most interesting project you are currently involved with?

All of them! If I have to highlight one, we are working with elementary schools that have classes of newly arrived children in a project where we are using social robots to promote inclusion between newly arrived and local children. This is part of an early career fellowship awarded by the Jacobs Foundation.

We currently observe many promising and exciting advances in using AI in education, going beyond automating Piazza answering. How should we make use of AI to teach AI?

I believe that AI can be used to complement teachers and provide personalized instruction to students of all ages and in a variety of topics. Robotic tutors can play an important role in education because the mere physical presence of a robot has been shown to have a positive impact on how much information students can recall, for example when compared to a virtual agent displayed on a computer screen delivering the exact same content.

How can we make AI more diverse? Do you have a concrete idea on what we as (PhD) students, researchers, and educators in AI can do to increase diversity in our field?

Something we can all do is to participate in outreach initiatives targeting groups underrepresented in AI to show them that there is space for them in the community. If we start bottom-up, in the long term I am positive that our community will be more diverse at all levels and the bias in opportunities, recruiting, etc. will go away.

What was your most difficult professional decision and why?

Leaving my home country (Portugal) after finishing my PhD to continue my research career, because I miss my family and friends, and also the good weather!

How do you balance being involved in so many different aspects of the AI community?

I love what I do and I currently don’t have any hobbies 🙂

AI is grown up – it’s time to make use of it for good. Which real-world problem would you like to see solved by AI in the future?

If AI could fully address any of the Sustainable Development Goals established by the United Nations, it would be (more than) great. Although there are excellent research and funding initiatives in that direction, we are still not there yet.

What is your favorite AI-related movie or book and why?

One of my favorite ones recently was the Westworld TV series because of the power relationships between the human and the robotic characters. I find it hard to believe that humans will treat robots the way they are treated in the series, but it makes me reflect on what our future interactions with technology that is becoming more personalized and “human-like” might look like.

Autonomous Vehicles: Policy and Technology

In 2018, we discussed language that aims at safety and degrees of autonomy rather than possibly unattainable goals of completely autonomous things. A better approach, at least for the next 5-10 years, is to seek the correct balance between technology and humans in hybrid devices and systems. See, for example, the Unmanned Integrated Systems Roadmap, 2017-2042 and Ethically Aligned Design. We also need to consider the limits and possibilities of research on these technologies, their impacts on time frames, and the proper focus of policymaking.

In a recent interview, Dr. Harold Szu, a co-founder and former governor of the International Neural Network Society, discusses research ideas that better mimic human thinking and that could dramatically reduce the time to develop autonomous technology. He discusses a possible new level of brain-style computing that incorporates fuzzy membership functions into autonomous control systems. Autonomous technology incorporating human characteristics, along with safe policies and earlier arrival of brain-style technologies, could usher in the next big economic boom. For more details, view the Harold Szu interview.
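As a toy illustration of the fuzzy-membership idea (our sketch; Szu’s proposals are far richer), triangular membership functions can grade a sensor reading into overlapping categories and blend them into a smooth control output:

    # Illustrative sketch: triangular fuzzy membership functions feeding a
    # simple weighted-average (Sugeno-style) braking controller.
    def tri_mf(x, a, b, c):
        """Membership rising on [a, b], falling on [b, c], zero elsewhere."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Toy fuzzy sets for "distance to obstacle" in meters.
    def near(d):   return tri_mf(d, -1.0, 0.0, 20.0)
    def medium(d): return tri_mf(d, 10.0, 30.0, 50.0)
    def far(d):    return tri_mf(d, 40.0, 80.0, 120.0)

    def brake_command(d):
        """Blend rule outputs by membership: full brake when near, none when far."""
        memberships = [near(d), medium(d), far(d)]
        brake_levels = [1.0, 0.4, 0.0]
        total = sum(memberships)
        return sum(m * b for m, b in zip(memberships, brake_levels)) / total if total else 0.0

    print(brake_command(15.0))  # partial membership in both "near" and "medium"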

Discussion Issues for 2019

Facebook, Face Recognition, Autonomous Things, and the Future of Work

Four focus areas of discussion at the end of 2018 are the initial topics for the SIGAI Policy Blog as we start 2019. The following, with links to resources, are important ongoing subjects for our Policy blog site in the new year:

Facebook continues to draw attention to the general issue of data privacy and the role of personal data in business models. Here are some good resources to check:

  • NY Times on Facebook Privacy
  • Facebook Partners
  • Spotify
  • Netflix

Facial recognition software is known to be flawed, with side effects including bias, unwanted surveillance, and other problems. The Safe Face Pledge, developed by the Algorithmic Justice League and the Center on Privacy & Technology at Georgetown University Law Center, is an example of emerging efforts to make organizations aware of problems with facial recognition products, for example in autonomous weapons systems and law enforcement applications. The Safe Face Pledge asks that companies commit to safety in business practices and promote public policy for broad regulation and government oversight of facial recognition applications.

“Autonomous” Things: Degrees of Separation: The R&D for “autonomous” vehicles and other devices that dominate our daily lives poses challenges for technology as well as for ethics and policy. In 2018, we discussed language that aims at safety and degrees of autonomy rather than possibly unattainable goals of completely autonomous things. A better approach may be to seek the correct balance between technology and humans in hybrid devices and systems. See, for example, the Unmanned Integrated Systems Roadmap, 2017-2042 and Ethically Aligned Design.

The Future of Work and Education is a topic that not only tries to predict the workforce of the future, but also how society needs to prepare for it. Many experts believe that our current school systems are not up to the challenge and that industry and government programs are needed to meet the demands emerging in just a few years. See, for example, writing by the Ford Foundation and the World Economic Forum.

We welcome your feedback and discussions as we enter the 2019 world of AI and policy!

ACM SIGAI Industry Award for Excellence in Artificial Intelligence

The ACM SIGAI Industry Award for Excellence in Artificial Intelligence (AI) will be given annually to individuals or teams who created AI applications in recent years in ways that demonstrate the power of AI techniques via a combination of the following features: novelty of application area, novelty and technical excellence of the approach, importance of AI techniques for the approach, and actual and predicted societal impact of the application. The award plaque is accompanied by a prize of $5,000 and will be awarded at the International Joint Conference on Artificial Intelligence through an agreement with the IJCAI Board of Trustees.

After decades of progress in AI theory, research, and development, AI applications are now increasingly moving into the commercial sector. A great deal of pioneering application-level work is being done—from startups to large corporations—and this is influencing commerce and the broad public in a wide variety of ways. This award complements the numerous academic, best-paper, and related awards in that it focuses on innovators of fielded AI applications, honoring those who are playing key roles in AI commercialization. The award honors these innovators and highlights their achievements (and thus also the benefit of AI techniques) to computing professionals and the public at large. The award committee will consider applications that are open source or proprietary and that may or may not involve hardware.

Evaluation criteria:
The criteria include the following, but there is no fixed weighting of them:

  • Novelty of application area
  • Novelty and technical excellence of the approach
  • Importance of AI techniques for the approach
  • Actual and predicted societal benefits of the fielded application

Eligibility criteria:
Any individual or team, worldwide, is eligible for the award.

Nomination procedure:
One nomination and three endorsements must be submitted. The nomination must identify the individual or team members, describe their fielded AI system, and explain how it addresses the award criteria. The nomination must be written by a member of ACM SIGAI. Two of the endorsements must be from members of ACM or ACM SIGAI. Anyone can join ACM SIGAI at any time for an annual membership fee of just US$11 (students) or US$25 (others), even if they are not an ACM member.

Please submit the nomination and endorsements as a single PDF file in an email to SIGAIIndustryAward@ACM.org. We will acknowledge receipt of the nomination.

Timeline:

  • Nominations Due: March 1, 2019
  • Award Announcement: April 25, 2019
  • Award Presentation: August 10-16, 2019 at IJCAI in Macao (China)

Call for Proposals: Artificial Intelligence Activities Fund

ACM SIGAI invites funding proposals for artificial intelligence (AI) activities with a strong outreach component to either students, researchers, or practitioners not working on AI technologies or to the public in general.

The purpose of this call is to promote a better understanding of current AI technologies, including their strengths and limitations, as well as their promise for the future. Examples of fundable activities include (but are not limited to) AI technology exhibits or exhibitions, holding meetings with panels on AI technology (including on AI ethics) with expert speakers, creating podcasts or short films on AI technologies that are accessible to the public, and holding AI programming competitions. ACM SIGAI will look for evidence that the information presented by the activity will be of high quality, accurate, unbiased (for example, not influenced by company interests), and at the right level for the intended audience.

ACM SIGAI has set aside $10,000 to provide grants of up to $2,000 each, with priority given to a) proposals from ACM affiliated organizations other than conferences (such as ACM SIGAI chapter or ACM chapters), b) out-of-the-box ideas, c) new activities (rather than existing and recurring activities), d) activities with long-term impact, e) activities that reach many people, and f) activities co-funded by others. We prefer not to fund activities for which sufficient funding is already available from elsewhere or that result in profit for the organizers. Note that expert talks on AI technology can typically be arranged with financial support of the ACM Distinguished Speaker program (https://speakers.acm.org/) and then are not appropriate for funding via this call.

A proposal should contain the following information on at most 3 pages:

  • a description of the activity (including when and where it will be held);
  • a budget for the activity and the amount of funding requested, and whether other organizations have been or will be approached for funding (and, if so, for how much);
  • an explanation of how the activity fits this call (including whether it is new or recurring, which audience it will benefit, and how large the audience is);
  • a description of the organizers and other participants (such as speakers) involved in the activity (including their expertise and their affiliation with ACM SIGAI or ACM);
  • a description of what will happen to any surplus, should one unexpectedly arise; and
  • the name, affiliation, and contact details (including postal and email address, phone number, and URL) of the corresponding organizer.

Grantees are required to submit reports to ACM SIGAI following completion of their activities with details on how they utilized the funds and other information which might also be published in the ACM SIGAI newsletter “AI Matters.”

The deadline for submissions is 11:59pm on March 15, 2019 (UTC-12). Proposals should be submitted as PDF documents in any style at

https://easychair.org/conferences/?conf=sigaiaaf2019.

The funding decisions of ACM SIGAI are final and cannot be appealed. Some funding earmarked for this call might not be awarded at the discretion of ACM SIGAI, for example, in case the number of high-quality proposals is not sufficiently large. In case of questions, please first check the ACM SIGAI blog for announcements and clarifications: https://sigai.acm.org/aimatters/blog/. Questions should be directed to Sven Koenig (skoenig@usc.edu).

ACM and ACM SIGAI

ACM brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. As the world’s largest computing society, ACM strengthens the profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM’s reach extends to every part of the globe, with more than half of its 100,000 members residing outside the U.S.  Its growing membership has led to Councils in Europe, India, and China, fostering networking opportunities that strengthen ties within and across countries and technical communities. Their actions enhance ACM’s ability to raise awareness of computing’s important technical, educational, and social issues around the world. See https://www.acm.org/ for more information.

ACM SIGAI brings together academic and industrial researchers, practitioners, software developers, end users, and students who are interested in AI. It promotes and supports the growth and application of AI principles and techniques throughout computing, sponsors or co-sponsors AI-related conferences, organizes the Career Network and Conference for early-stage AI researchers, sponsors recognized AI awards, supports AI journals, provides scholarships to its student members to attend conferences, and promotes AI education and publications through various forums and the ACM digital library. See https://sigai.acm.org/ for more information.

Sven Koenig, ACM SIGAI chair
Sanmay Das, ACM SIGAI vice-chair
Rosemary Paradis, ACM SIGAI secretary/treasurer
Michael Rovatsos, ACM SIGAI conference coordination officer
Nicholas Mattei, ACM SIGAI AI and society officer