GenAI

(Note: This blog post was not created by a GenAI tool. A human brain gathered, organized, and summarized text from several sources to create the blog content.)

The uses of Generative AI (GenAI) systems — including fully automated ones — are raising red flags throughout the business, academic, and legal communities. The ACM Technology Policy Council, US Technology Policy Committee, and Europe Technology Policy Committee are on record with statements and principles addressing these technologies and associated issues.

Principles for the Development, Deployment, and Use of Generative AI Technologies (June 27, 2023)

Generative Artificial Intelligence (GenAI) is a broad term used to describe computing techniques and tools that can be used to create new content including text, speech and audio, images and video, computer code, and other digital artifacts. While such systems offer tremendous opportunities for benefits to society, they also pose very significant risks. The increasing power of GenAI systems, the speed of their evolution, breadth of application, and potential to cause significant or even catastrophic harm means that great care must be taken in researching, designing, developing, deploying, and using them. Existing mechanisms and modes for avoiding such harm likely will not suffice.

This statement puts forward principles and recommendations for best practices in these and related areas based on a technical understanding of GenAI systems. The first four principles address issues regarding limits of use, ownership, personal data control, and correctability. The next four principles, which pertain to transparency, auditability and contestability, limiting environmental impacts, and security and privacy, were derived and adapted from the joint ACM Statement on Principles for Responsible Algorithmic Systems released in October 2022. This statement also reaffirms and includes five principles from the joint statement as originally formulated and has been informed by the January 2023 ACM TechBrief: Safer Algorithmic Systems. The instrumental principles, consistent with the ACM Code of Ethics, are intended to foster fair, accurate, and beneficial decision-making concerning generative and all other AI technologies:

The first set of generative AI advances rests on very large AI models that are trained on extremely large corpora of data. Examples that are text-oriented include BLOOM, Chinchilla, GPT-4, LaMDA, and OPT, as well as conversation-oriented models like Bard, ChatGPT, and others. This is a rapidly evolving area, so this list of examples is by no means exhaustive. The principles advanced in this document are also certain to evolve in response to changing circumstances, technological capabilities, and societal norms.

Generative AI models and tools offer significant new opportunities for enhancing numerous online experiences and services, automating tasks normally done by humans, and assisting and enhancing human creativity. From another perspective, such models and tools also have raised significant concerns about multiple aspects of information and its use, including accuracy, disinformation, deception, data collection, ownership, attribution, accountability, transparency, bias, user control, confidentiality, privacy, and security. GenAI also raises important questions, including many about the replacement of human labor and jobs by AI-based machines and automation.

ACM TechBrief on GenAI (Summer 2023 | Issue 8)

This TechBrief focuses on the rapid commercialization of GenAI, which poses multiple large-scale risks to individuals, society, and the planet; mitigation requires a rapid, internationally coordinated response. The TechBrief presents conclusions concerning AI policy: governance should incorporate end-to-end approaches that address risks “by design” and regulate at all stages of the design-to-deployment life cycle of AI products; governance mechanisms for GenAI technologies should address the entirety of their complex supply chains; and actors should be subject to controls that are proportionate to the scope and scale of the risks their products pose.

Development and Use of Systems to Detect Generative AI Content (under development)

The dramatic increase in the availability, proliferation, and use of GenAI technology in all sectors of society has created concomitant growing demand for systems that can reliably detect when a document, image, or audio file contains information produced in whole or in part by a generative AI system. Specifically, for example,

● educational institutions want systems that can reliably detect when college applications and student assignments were created with the assistance of generative AI systems;

● employers want systems that can detect the use of generative AI in job applications;

● media companies want systems that can distinguish human comments from responses generated by chatbots; and

● government agencies need systems that can distinguish letters and comments written by humans from those that were algorithmically generated.

Regardless of the demand, such systems are currently not reliably accurate or fair. No presently available detection technology is sufficiently dependable for exclusive support of critical, potentially life- and career-altering decisions. Accordingly, while AI detection systems may provide useful preliminary assessments, their outputs should not be accepted as proof of AI-generated content.

For additional resources, contact the ACM Technology Policy Office
1701 Pennsylvania Ave NW, Suite 200 Washington, DC 20006
+1 202.580.6555 acmpo@acm.org www.acm.org/publicpolicy

AI Policy Matters

As SIGAI Public Policy Officer, I have developed links with other policy groups, particularly the ACM US Technology Policy Committee (USTPC). AI represents an expanding share of the technology policy arena, and as the new Chair of USTPC I plan to report on current resources and issues regularly through the AI Matters blog.

ACM and its US Technology Policy Committee are non-profit, non-lobbying, and entirely apolitical. The mission is simply to help policymakers and their staff, the science community, and the public understand all forms of computing technology so they can make technically informed decisions and recommendations. A short list of recent USTPC policy products on artificial intelligence includes our latest on Generative AI and Cybersecurity; more information on key issues is available from USTPC.

Another ACM policy resource is the TechBrief series of short technical bulletins that present scientifically grounded perspectives on the impact of specific developments or applications of technology. Designed to complement ACM’s activities in the policy arena, TechBriefs aim primarily to inform rather than advocate for specific policies. AI topics in recent and upcoming TechBriefs include AI and trust, AI media disinformation, smart cities, safer AI systems, and generative AI.

Future AI Matters blog posts will focus on specific AI public policy projects and resources, and we look forward to blog discussions on these important topics. USTPC always seeks participation from the experts at SIGAI to help identify emerging issues, write policy statements, and present at hearings.

I welcome your ideas in messages to medsker@acm.org and participation in the blog discussions.

Big Issues

Big Tobacco, Big Oil, Big Banks … and Big Tech

A larger discussion is growing out of the recent news about Timnit Gebru and Google. Big Tech has a huge impact on individuals and society, both through the many products and services we enjoy and through current and potential harms from unethical behavior or naiveté about AI ethics issues. How do we achieve AI ethics responsibility in all organizations, big and small? And not just in corporations, but also in governmental and academic research organizations?

Some concerned people focus on regulation, but for a variety of reasons public and community pressure may be quicker and more acceptable. This includes corporations earning reputations for ethical actions in the design and development of AI products and systems. An article in MIT Technology Review by Karen Hao discusses a letter signed by nine members of Congress that “sends an important signal about how regulators will scrutinize tech giants.” Ideally, our public policy goal is strong national and global AI ethics communities that self-regulate on AI ethics issues, comparable to other professional disciplines such as medical science and cybersecurity. As guidelines evolve, our AI ethics community could provide a supportive and guiding presence in the implementation of ethical norms in AI research and development. The idea of a global community is also reflected in a recent speech by European Commission President Ursula von der Leyen at the World Leader for Peace and Security Award ceremony, in which she advocates for transatlantic agreements on AI.

AI Centre of Excellence (AICE)

AICE held an inaugural celebration in December 2020. Director John Kamara founded the AI Centre of Excellence in Kenya and is passionate about creating value and long-term impact with AI and ML in Africa. The Centre aims to accomplish this by providing expert training to create skilled and employable AI and ML engineers, and it pursues sustainable impact through research and development. AI research and products are estimated to contribute over $13 trillion to the global economy by 2030, which offers the Centre an opportunity to carry out research in selected sectors and build products based on that research. The world has only around 40,000 AI experts, with nearly half in the US and less than 5% in Africa. Oxford Insights estimates that Kenya ranks first in Africa, and AICE aims to leverage this potential to become a full-blown Artificial Intelligence Centre of Excellence. Please keep your eyes on Africa and on ways our public policy can assist efforts there to grow AI in emerging education and research.

Call for Nominations

Editor-In-Chief ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)

The term of the current Editor-in-Chief (EiC) of the ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) is coming to an end, and the ACM Publications Board has set up a nominating committee to assist the Board in selecting the next EiC. TALLIP was established in 2002 and has been experiencing steady growth, with 178 submissions received in 2017.

Nominations, including self-nominations, are invited for a three-year term as TALLIP EiC, beginning on June 1, 2019. The EiC appointment may be renewed at most one time. This is an entirely voluntary position, but ACM will provide appropriate administrative support.

Appointed by the ACM Publications Board, Editors-in-Chief (EiCs) of ACM journals are delegated full responsibility for the editorial management of the journal, consistent with the journal’s charter and general ACM policies. The Board relies on EiCs to ensure that the content of the journal is of high quality and that the editorial review process is both timely and fair. The EiC has final say on acceptance of papers, the size of the Editorial Board, and the appointment of Associate Editors. A complete list of responsibilities is found in the ACM Volunteer Editors Position Descriptions. Additional information can be found in the following documents:

Nominations should include a vita along with a brief statement of why the nominee should be considered. Self-nominations are encouraged, and should include a statement of the candidate’s vision for the future development of TALLIP. The deadline for submitting nominations is April 15, 2019, although nominations will continue to be accepted until the position is filled.

Please send all nominations to the nominating committee chair, Monojit Choudhury (monojitc@microsoft.com).

The search committee members are:

  • Monojit Choudhury (Microsoft Research, India), Chair
  • Kareem M. Darwish (Qatar Computing Research Institute, HBKU)
  • Tei-wei Kuo (National Taiwan University & Academia Sinica) EiC of ACM Transactions on Cyber-Physical Systems; Vice Chair, ACM SIGAPP
  • Helen Meng, (Chinese University of Hong Kong)
  • Taro Watanabe (Google Inc., Tokyo)
  • Holly Rushmeier (Yale University), ACM Publications Board Liaison

ACM SIGAI Industry Award for Excellence in Artificial Intelligence

The ACM SIGAI Industry Award for Excellence in Artificial Intelligence (AI) will be given annually to individuals or teams who created AI applications in recent years in ways that demonstrate the power of AI techniques via a combination of the following features: novelty of application area, novelty and technical excellence of the approach, importance of AI techniques for the approach, and actual and predicted societal impact of the application. The award plaque is accompanied by a prize of $5,000 and will be awarded at the International Joint Conference on Artificial Intelligence through an agreement with the IJCAI Board of Trustees.

After decades of progress in AI theory, research, and development, AI applications are now increasingly moving into the commercial sector. A great deal of pioneering application-level work is being done, from startups to large corporations, and this is influencing commerce and the broad public in a wide variety of ways. This award complements the numerous academic, best-paper, and related awards in that it focuses on innovators of fielded AI applications, honoring those who are playing key roles in AI commercialization. The award honors these innovators and highlights their achievements (and thus also the benefit of AI techniques) to computing professionals and the public at large. The award committee will consider applications that are open source or proprietary and that may or may not involve hardware.

Evaluation criteria:
The criteria include the following, but there is no fixed weighting of them:

  • Novelty of application area
  • Novelty and technical excellence of the approach
  • Importance of AI techniques for the approach
  • Actual and predicted societal benefits of the fielded application

Eligibility criteria:
Any individual or team, worldwide, is eligible for the award.

Nomination procedure:
One nomination and three endorsements must be submitted. The nomination must identify the individual or team members, describe their fielded AI system, and explain how it addresses the award criteria. The nomination must be written by a member of ACM SIGAI. Two of the endorsements must be from members of ACM or ACM SIGAI. Anyone can join ACM SIGAI at any time for an annual membership fee of just US$11 (students) or US$25 (others), even if they are not an ACM member.

Please submit the nomination and endorsements as a single PDF file in an email to SIGAIIndustryAward@ACM.org. We will acknowledge receipt of the nomination.

Timeline:

  • Nominations Due: March 1, 2019
  • Award Announcement: April 25, 2019
  • Award Presentation: August 10-16, 2019 at IJCAI in Macao (China)

Call for Proposals: Artificial Intelligence Activities Fund

ACM SIGAI invites funding proposals for artificial intelligence (AI) activities with a strong outreach component to either students, researchers, or practitioners not working on AI technologies or to the public in general.

The purpose of this call is to promote a better understanding of current AI technologies, including their strengths and limitations, as well as their promise for the future. Examples of fundable activities include (but are not limited to) AI technology exhibits or exhibitions, holding meetings with panels on AI technology (including on AI ethics) with expert speakers, creating podcasts or short films on AI technologies that are accessible to the public, and holding AI programming competitions. ACM SIGAI will look for evidence that the information presented by the activity will be of high quality, accurate, unbiased (for example, not influenced by company interests), and at the right level for the intended audience.

ACM SIGAI has set aside $10,000 to provide grants of up to $2,000 each, with priority given to a) proposals from ACM-affiliated organizations other than conferences (such as ACM SIGAI chapters or ACM chapters), b) out-of-the-box ideas, c) new activities (rather than existing and recurring activities), d) activities with long-term impact, e) activities that reach many people, and f) activities co-funded by others. We prefer not to fund activities for which sufficient funding is already available from elsewhere or that result in a profit for the organizers. Note that expert talks on AI technology can typically be arranged with financial support from the ACM Distinguished Speaker program (https://speakers.acm.org/) and are then not appropriate for funding via this call.

A proposal should contain the following information on at most 3 pages:

  • a description of the activity (including when and where it will be held);
  • a budget for the activity and the amount of funding requested, and whether other organizations have been or will be approached for funding (and, if so, for how much);
  • an explanation of how the activity fits this call (including whether it is new or recurring, which audience it will benefit, and how large the audience is);
  • a description of the organizers and other participants (such as speakers) involved in the activity (including their expertise and their affiliation with ACM SIGAI or ACM);
  • a description of what will happen to any surplus, in the unexpected event that there is one; and
  • the name, affiliation, and contact details (including postal and email address, phone number, and URL) of the corresponding organizer.

Grantees are required to submit reports to ACM SIGAI following completion of their activities with details on how they utilized the funds and other information which might also be published in the ACM SIGAI newsletter “AI Matters.”

The deadline for submissions is 11:59pm on March 15, 2019 (UTC-12). Proposals should be submitted as PDF documents in any style at

https://easychair.org/conferences/?conf=sigaiaaf2019.

The funding decisions of ACM SIGAI are final and cannot be appealed. Some funding earmarked for this call might not be awarded at the discretion of ACM SIGAI, for example, in case the number of high-quality proposals is not sufficiently large. In case of questions, please first check the ACM SIGAI blog for announcements and clarifications: https://sigai.acm.org/aimatters/blog/. Questions should be directed to Sven Koenig (skoenig@usc.edu).

ACM and ACM SIGAI

ACM brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. As the world’s largest computing society, ACM strengthens the profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM’s reach extends to every part of the globe, with more than half of its 100,000 members residing outside the U.S.  Its growing membership has led to Councils in Europe, India, and China, fostering networking opportunities that strengthen ties within and across countries and technical communities. Their actions enhance ACM’s ability to raise awareness of computing’s important technical, educational, and social issues around the world. See https://www.acm.org/ for more information.

ACM SIGAI brings together academic and industrial researchers, practitioners, software developers, end users, and students who are interested in AI. It promotes and supports the growth and application of AI principles and techniques throughout computing, sponsors or co-sponsors AI-related conferences, organizes the Career Network and Conference for early-stage AI researchers, sponsors recognized AI awards, supports AI journals, provides scholarships to its student members to attend conferences, and promotes AI education and publications through various forums and the ACM digital library. See https://sigai.acm.org/ for more information.

Sven Koenig, ACM SIGAI chair
Sanmay Das, ACM SIGAI vice-chair
Rosemary Paradis, ACM SIGAI secretary/treasurer
Michael Rovatsos, ACM SIGAI conference coordination officer
Nicholas Mattei, ACM SIGAI AI and society officer

Joint AAAI/ACM SIGAI Doctoral Dissertation Award

The Special Interest Group on Artificial Intelligence of the Association for Computing Machinery (ACM SIGAI) and the Association for the Advancement of Artificial Intelligence (AAAI) are happy to announce that they have established the Joint AAAI/ACM SIGAI Doctoral Dissertation Award to recognize and encourage superior research and writing by doctoral candidates in artificial intelligence. This annual award is presented at the AAAI Conference on Artificial Intelligence in the form of a certificate and is accompanied by the option to present the dissertation at the AAAI conference as well as to submit one 6-page summary for both the AAAI proceedings and the newsletter of ACM SIGAI. Up to two Honorable Mentions may also be awarded, likewise with the option to present their dissertations at the AAAI conference and to submit one 6-page summary for both the AAAI proceedings and the newsletter of ACM SIGAI.

The award will be presented for the first time at the AAAI conference in 2020 for dissertations that have been successfully defended (but not necessarily finalized) between October 1, 2018 and September 30, 2019. Nominations are welcome from any country, but only English language versions will be accepted. Only one nomination may be submitted per Ph.D. granting institution, including large universities.

Dissertations will be reviewed for relevance to artificial intelligence, technical depth and significance of the research contribution, potential impact on theory and practice, and quality of presentation. The details of the nomination process will be announced in early 2019.

2018 ACM SIGAI Student Essay Contest on Artificial Intelligence Technologies

After the success of our 2017 version of the contest we are happy to announce another round of the ACM SIGAI Student Essay Contest on Artificial Intelligence Technologies!

Download a PDF of the call here: https://tinyurl.com/SIGAIEssay2018

Win one of several $500 monetary prizes or a Skype conversation with a leading AI researcher including Joanna Bryson, Murray Campbell, Eric Horvitz, Peter Norvig, Iyad Rahwan, Francesca Rossi, or Toby Walsh.

We have extended the deadline to February 15th, 2019, Anywhere on Earth Time Zone.  Please get your submissions in!!

Students interested in these topics should consider submitting to the 2019 Artificial Intelligence, Ethics, and Society Conference and/or Student Program — Deadline is in early November.  See the website for all the details.

2018 Topic

The ACM Special Interest Group on Artificial Intelligence (ACM SIGAI) supports the development and responsible application of Artificial Intelligence (AI) technologies. From intelligent assistants to self-driving cars, an increasing number of AI technologies now (or soon will) affect our lives. Examples include Google Duplex (Link) talking to humans, Drive.ai (Link) offering rides in US cities, chatbots advertising movies by impersonating people (Link), and AI systems making decisions about parole (Link) and foster care (Link). We interact with AI systems, whether we know it or not, every day.

Such interactions raise important questions. ACM SIGAI is in a unique position to shape the conversation around these and related issues and is thus interested in obtaining input from students worldwide to help shape the debate. We therefore invite all students to enter an essay in the 2018 ACM SIGAI Student Essay Contest, to be published in the ACM SIGAI newsletter “AI Matters,” addressing one or both of the following topic areas (or any other question in this space that you feel is important) while providing supporting evidence:

  • What requirements, if any, should be imposed on AI systems and technology when interacting with humans who may or may not know that they are interacting with a machine?  For example, should they be required to disclose their identities? If so, how? See, for example, “Turing’s Red Flag” in CACM (Link).
  • What requirements, if any, should be imposed on AI systems and technology when making decisions that directly affect humans? For example, should they be required to make transparent decisions? If so, how?  See, for example, the IEEE’s summary discussion of Ethically Aligned Design (Link).

Each of the above topic areas raises further questions, including

  • Who is responsible for the training and maintenance of AI systems? See, for example, Google’s (Link), Microsoft’s (Link), and IBM’s (Link) AI Principles.
  • How do we educate ourselves and others about these issues and possible solutions? See, for example, new ways of teaching AI ethics (Link).
  • How do we handle the fact that different cultures see these problems differently?  See, for example, Joi Ito’s discussion in Wired (Link).
  • Which steps can governments, industries, or organizations (including ACM SIGAI) take to address these issues?  See, for example, the goals and outlines of the Partnership on AI (Link).

All sources must be cited. However, we are not interested in summaries of the opinions of others. Rather, we are interested in the informed opinions of the authors. Writing an essay on this topic requires some background knowledge. Possible starting points for acquiring such background knowledge are:

  • the revised ACM Code of Ethics (Link), especially Section 3.7, and a discussion of why the revision was necessary (Link),
  • IEEE’s Ethically Aligned Design (Link), and
  • the One Hundred Year Study on AI and Life in 2030 (Link).


Format and Eligibility

The ACM SIGAI Student Essay Contest is open to all ACM SIGAI student members at the time of submission. (If you are a student but not an ACM SIGAI member, you can join ACM SIGAI before submission for just US$11 at https://goo.gl/6kifV9 by selecting Option 1, even if you are not an ACM member.) Essays can be authored by one or more ACM SIGAI student members, but each ACM SIGAI student member can (co-)author only one essay.

All authors must be ACM SIGAI members at the time of submission; submissions not meeting this requirement will not be reviewed.

Essays should be submitted as PDF documents in any style, with at most 5,000 words, via https://easychair.org/conferences/?conf=acmsigai2018.

The original deadline for submissions was January 10th, 2019; we have extended the deadline to February 15th, 2019, Anywhere on Earth Time Zone. Please get your submissions in!

The authors certify with their submissions that they have followed the ACM publication policies on “Author Representations,” “Plagiarism” and “Criteria for Authorship” (http://www.acm.org/publications/policies/). They also certify with their submissions that they will transfer the copyright of winning essays to ACM.

Judges and Judging Criteria

Winning entries from last year’s essay contest can be found in recent issues of the ACM SIGAI newsletter “AI Matters,” specifically Volume 3, Issue 3 (http://sigai.acm.org/aimatters/3-3.html) and Volume 3, Issue 4 (http://sigai.acm.org/aimatters/3-4.html).

Entries will be judged by the following panel of leading AI researchers and ACM SIGAI officers. Winning essays will be selected based on depth of insight, creativity, technical merit, and novelty of argument. All decisions by the judges are final.

    • Rediet Abebe, Cornell University
    • Emanuelle Burton, University of Illinois at Chicago
    • Sanmay Das, Washington University in St. Louis  
    • John P. Dickerson, University of Maryland
    • Virginia Dignum, Delft University of Technology
    • Tina Eliassi-Rad, Northeastern University
    • Judy Goldsmith, University of Kentucky
    • Amy Greenwald, Brown University
    • H. V. Jagadish, University of Michigan
    • Sven Koenig, University of Southern California  
    • Benjamin Kuipers, University of Michigan  
    • Nicholas Mattei, IBM Research
    • Alexandra Olteanu, Microsoft Research
    • Rosemary Paradis, Leidos
    • Kush Varshney, IBM Research
    • Roman Yampolskiy, University of Louisville
    • Yair Zick, National University of Singapore

Prizes

All winning essays will be published in the ACM SIGAI newsletter “AI Matters.” ACM SIGAI provides five monetary awards of USD 500 each as well as 45-minute Skype sessions with the following AI researchers:

    • Joanna Bryson, Reader (Assoc. Prof) in AI, University of Bath
    • Murray Campbell, Senior Manager, IBM Research AI
    • Eric Horvitz, Managing Director, Microsoft Research
    • Peter Norvig, Director of Research, Google
    • Iyad Rahwan, Associate Professor, MIT Media Lab and Head of Scalable Corp.
    • Francesca Rossi, AI and Ethics Global Lead, IBM Research AI
    • Toby Walsh, Scientia Professor of Artificial Intelligence, UNSW Sydney, Data61 and TU Berlin

One award is given per winning essay. Authors or teams of authors of winning essays will pick (in a pre-selected order) an available Skype session or one of the monetary awards until all Skype sessions and monetary awards have been claimed. ACM SIGAI reserves the right to substitute a Skype session with a different AI researcher or a monetary award for a Skype session in case an AI researcher becomes unexpectedly unavailable. Some prizes might not be awarded in case the number of high-quality submissions is smaller than the number of prizes.

Questions?

In case of questions, please first check the ACM SIGAI blog for announcements and clarifications: https://sigai.acm.org/aimatters/blog/. You can also contact the ACM SIGAI Student Essay Contest Organizers at sigai@member.acm.org.

  • Nicholas Mattei (IBM Research) – ACM SIGAI Student Essay Contest Organizer and AI and Society Officer

with involvement from

    • Sven Koenig (University of Southern California), ACM SIGAI Chair
    • Sanmay Das (Washington University in St. Louis), ACM SIGAI Vice Chair
    • Rosemary Paradis (Leidos), ACM SIGAI Secretary/Treasurer
    • Benjamin Kuipers (University of Michigan), ACM SIGAI Ethics Officer
    • Amy McGovern (University of Oklahoma), ACM SIGAI AI Matters Editor-in-Chief

AI’s Role in Cancer Research

Guest Post by Anna Suarez

It’s no secret the general public has mixed views about artificial intelligence, largely stemming from a misunderstanding of the topic. In the public’s mind, AI tends to be equated with the creation of nearly lifelike robots and, although it sometimes is, there is much more to this rapidly advancing technology.

In today’s society, AI plays an everyday role in the lives of most people, from ride-hailing apps like Uber and Lyft to Facebook’s facial recognition, and the technology is constantly advancing. For example, Google’s new AI assistant is revolutionizing the way people go about their daily tasks, and recent AI-enabled advancements in healthcare are changing the way cancer research is approached.

Although the term “AI” dates back to the mid-1950s, the technology has become so sophisticated in recent years that a cure for cancer could be around the corner. Former Vice President Joe Biden’s Cancer Moonshot initiative aims to find a cure for cancer and provide patients with more treatment options, using AI to process and sort data from cancer researchers. To meet the initiative’s goal of driving ten years’ worth of research progress in only five, AI is also being used to detect certain cancers earlier than is possible with other currently available diagnostic procedures.

The ability to diagnose certain cancers, including brain cancer, skin cancer, and mesothelioma, through this technology is arguably one of the most important healthcare advancements to come out of AI. This is especially significant for patients battling mesothelioma, a rare cancer that develops in the mesothelium of the lungs, heart, or abdomen. Mesothelioma has a decades-long latency period, and its symptoms are often mistaken for those of more common ailments. Unfortunately, the disease carries an average prognosis of 6–12 months, leaving patients with little time to coordinate treatment. Ultimately, earlier detection and diagnosis of cancers may lead to better prognoses and outcomes.

Using AI, the cloud, and other tools, companies like IBM and Microsoft are attempting to change the way healthcare is approached. IBM’s supercomputer, Watson, famously won $1 million on Jeopardy! in 2011 against two of the show’s most successful contestants, and IBM is now working to apply it to streamlining the diagnosis of diseases in patients.

Although IBM Watson hasn’t made as much progress as anticipated, the technology proved capable in a 2017 study. The researchers compared the time it took Watson to create a treatment plan with the time it took doctors: Watson produced a treatment plan for a brain cancer patient in 10 minutes, while the same process took doctors 160 hours.

However, the study wasn’t a complete win for Watson. Its suggested plan of action was comparatively shortsighted because of its inability to weigh multiple treatment options: while doctors were able to consider several possibilities at once, Watson could not.

Healthcare NExT, Microsoft’s internal initiative announced in 2017, is also focusing on using technology to find solutions to questions in the healthcare industry, including a cure for cancer.

In a Microsoft blog post about Healthcare NExT, Peter Lee, Corporate Vice President of Microsoft AI + Research, says Microsoft is “expanding [its] commitment to building a healthier future with new initiatives and solutions, making it easier for health industry partners and organizations to use intelligent technology to improve the lives of people around the world.”

Technology has changed dramatically within the past decade. Newly developed diagnostic methods and advanced approaches to healthcare that we could only have dreamed of in the past are changing the way we see the world today. Although machines are still learning and there is plenty of room for improvement, the work that’s been done in such a short period of time is nothing short of incredible.

Are we closer to a cure for cancer than even we know?

Resources:

https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/#74e645b846fa

https://www.theverge.com/2018/5/8/17332070/google-assistant-makes-phone-call-demo-duplex-io-2018

https://www.maacenter.org/mesothelioma/

https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/ibm-watson-makes-treatment-plan-for-brain-cancer-patient-in-10-minutes-doctors-take-160-hours

https://blogs.microsoft.com/blog/2017/02/16/microsoft-partners-combine-cloud-ai-research-industry-expertise-focus-transforming-health-care/

ACM/SIGAI Autonomous Agents Research Award 2018: Craig Boutilier

The selection committee for the ACM/SIGAI Autonomous Agents Research Award is pleased to announce that Dr. Craig Boutilier, Principal Research Scientist at Google, is the recipient of the 2018 award. Over the years, Dr. Boutilier has made seminal contributions to research on decision-making under uncertainty, game theory, and computational social choice. He is a pioneer in applying decision-theoretic concepts in novel ways in a variety of domains including (single- and multi-agent) planning and reinforcement learning, preference elicitation, voting, matching, facility location, and recommender systems. His recent research continues to significantly influence the field of computational social choice through the novel computational and methodological tools he introduced and his focus on modeling realistic preferences. In addition to his reputation for outstanding research, Dr. Boutilier is also recognized as an exceptional teacher and mentor.