2018 ACM SIGAI Student Essay Contest on Artificial Intelligence Technologies

After the success of the 2017 contest, we are happy to announce another round of the ACM SIGAI Student Essay Contest on Artificial Intelligence Technologies!

Download a PDF of the call here: https://tinyurl.com/SIGAIEssay2018

Win one of several $500 monetary prizes or a Skype conversation with a leading AI researcher such as Joanna Bryson, Murray Campbell, Eric Horvitz, Peter Norvig, Iyad Rahwan, Francesca Rossi, or Toby Walsh.

2018 Topic

The ACM Special Interest Group on Artificial Intelligence (ACM SIGAI) supports the development and responsible application of Artificial Intelligence (AI) technologies. From intelligent assistants to self-driving cars, an increasing number of AI technologies now (or soon will) affect our lives. Examples include Google Duplex (Link) talking to humans, Drive.ai (Link) offering rides in US cities, chatbots advertising movies by impersonating people (Link), and AI systems making decisions about parole (Link) and foster care (Link). We interact with AI systems, whether we know it or not, every day.

Such interactions raise important questions. ACM SIGAI is in a unique position to shape the conversation around these and related issues and is thus interested in obtaining input from students worldwide to help shape the debate. We therefore invite all students to enter an essay in the 2018 ACM SIGAI Student Essay Contest, to be published in the ACM SIGAI newsletter “AI Matters,” addressing one or both of the following topic areas (or any other question in this space that you feel is important) while providing supporting evidence:

  • What requirements, if any, should be imposed on AI systems and technology when interacting with humans who may or may not know that they are interacting with a machine?  For example, should they be required to disclose their identities? If so, how? See, for example, “Turing’s Red Flag” in CACM (Link).
  • What requirements, if any, should be imposed on AI systems and technology when making decisions that directly affect humans? For example, should they be required to make transparent decisions? If so, how?  See, for example, the IEEE’s summary discussion of Ethically Aligned Design (Link).

Each of the above topic areas raises further questions, including

  • Who is responsible for the training and maintenance of AI systems? See, for example, Google’s (Link), Microsoft’s (Link), and IBM’s (Link) AI Principles.
  • How do we educate ourselves and others about these issues and possible solutions? See, for example, new ways of teaching AI ethics (Link).
  • How do we handle the fact that different cultures see these problems differently?  See, for example, Joi Ito’s discussion in Wired (Link).
  • Which steps can governments, industries, or organizations (including ACM SIGAI) take to address these issues?  See, for example, the goals and outlines of the Partnership on AI (Link).

All sources must be cited. However, we are not interested in summaries of the opinions of others. Rather, we are interested in the informed opinions of the authors. Writing an essay on this topic requires some background knowledge. Possible starting points for acquiring such background knowledge are:

  • the revised ACM Code of Ethics (Link), especially Section 3.7, and a discussion of why the revision was necessary (Link),
  • IEEE’s Ethically Aligned Design (Link), and
  • the One Hundred Year Study on AI and Life in 2030 (Link).

ACM and ACM SIGAI

ACM brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. As the world’s largest computing society, ACM strengthens the profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM’s reach extends to every part of the globe, with more than half of its 100,000 members residing outside the U.S.  Its growing membership has led to Councils in Europe, India, and China, fostering networking opportunities that strengthen ties within and across countries and technical communities. Their actions enhance ACM’s ability to raise awareness of computing’s important technical, educational, and social issues around the world. See https://www.acm.org/ for more information.

ACM SIGAI brings together academic and industrial researchers, practitioners, software developers, end users, and students who are interested in AI. It promotes and supports the growth and application of AI principles and techniques throughout computing, sponsors or co-sponsors AI-related conferences, organizes the Career Network and Conference for early-stage AI researchers, sponsors recognized AI awards, supports AI journals, provides scholarships to its student members to attend conferences, and promotes AI education and publications through various forums and the ACM digital library. See https://sigai.acm.org/ for more information.

Format and Eligibility

The ACM SIGAI Student Essay Contest is open to all ACM SIGAI student members at the time of submission. (If you are a student but not an ACM SIGAI member, you can join ACM SIGAI before submission for just US$11 at https://goo.gl/6kifV9 by selecting Option 1, even if you are not an ACM member.) Essays can be authored by one or more ACM SIGAI student members, but each ACM SIGAI student member can (co-)author only one essay.

All authors must be ACM SIGAI members at the time of submission. Submissions that do not meet this requirement will not be reviewed.

Essays should be submitted as PDF documents (any style) of at most 5,000 words via EasyChair at https://easychair.org/conferences/?conf=acmsigai2018.

The deadline for submissions is January 10th, 2019.

The authors certify with their submissions that they have followed the ACM publication policies on “Author Representations,” “Plagiarism” and “Criteria for Authorship” (http://www.acm.org/publications/policies/). They also certify with their submissions that they will transfer the copyright of winning essays to ACM.

Judges and Judging Criteria

Winning entries from last year’s essay contest can be found in recent issues of the ACM SIGAI newsletter “AI Matters,” specifically  Volume 3, Issue 3: http://sigai.acm.org/aimatters/3-3.html and  Volume 3, Issue 4: http://sigai.acm.org/aimatters/3-4.html.

Entries will be judged by the following panel of leading AI researchers and ACM SIGAI officers. Winning essays will be selected based on depth of insight, creativity, technical merit, and novelty of argument. All decisions by the judges are final.

  • Rediet Abebe, Cornell University
  • Emanuelle Burton, University of Illinois at Chicago
  • Sanmay Das, Washington University in St. Louis  
  • John P. Dickerson, University of Maryland
  • Virginia Dignum, Delft University of Technology
  • Tina Eliassi-Rad, Northeastern University
  • Judy Goldsmith, University of Kentucky
  • Amy Greenwald, Brown University
  • H. V. Jagadish, University of Michigan
  • Sven Koenig, University of Southern California  
  • Benjamin Kuipers, University of Michigan  
  • Nicholas Mattei, IBM Research
  • Alexandra Olteanu, Microsoft Research
  • Rosemary Paradis, Leidos
  • Kush Varshney, IBM Research
  • Roman Yampolskiy, University of Louisville
  • Yair Zick, National University of Singapore  

Prizes

All winning essays will be published in the ACM SIGAI newsletter “AI Matters.” ACM SIGAI provides five monetary awards of USD 500 each as well as 45-minute Skype sessions with the following AI researchers:

  • Joanna Bryson, Reader (Assoc. Prof) in AI, University of Bath
  • Murray Campbell, Senior Manager, IBM Research AI
  • Eric Horvitz, Managing Director, Microsoft Research
  • Peter Norvig, Director of Research, Google
  • Iyad Rahwan, Associate Professor, MIT Media Lab and Head of Scalable Corp.
  • Francesca Rossi, AI and Ethics Global Lead, IBM Research AI
  • Toby Walsh, Scientia Professor of Artificial Intelligence, UNSW Sydney, Data61 and TU Berlin

One award is given per winning essay. Authors or teams of authors of winning essays will pick (in a pre-selected order) an available Skype session or one of the monetary awards until all Skype sessions and monetary awards have been claimed. ACM SIGAI reserves the right to substitute a Skype session with a different AI researcher, or a monetary award for a Skype session, if an AI researcher becomes unexpectedly unavailable. Some prizes might not be awarded if the number of high-quality submissions is smaller than the number of prizes.

Questions?

In case of questions, please first check the ACM SIGAI blog for announcements and clarifications: https://sigai.acm.org/aimatters/blog/. You can also contact the ACM SIGAI Student Essay Contest Organizers at sigai@member.acm.org.

  • Nicholas Mattei (IBM Research) – ACM SIGAI Student Essay Contest Organizer and AI and Society Officer

with involvement from

  • Sven Koenig (University of Southern California), ACM SIGAI Chair
  • Sanmay Das (Washington University in St. Louis), ACM SIGAI Vice Chair
  • Rosemary Paradis (Leidos), ACM SIGAI Secretary/Treasurer
  • Benjamin Kuipers (University of Michigan), ACM SIGAI Ethics Officer
  • Amy McGovern (University of Oklahoma), ACM SIGAI AI Matters Editor-in-Chief

 

WEF Report on the Future of Jobs

The World Economic Forum (WEF) recently released a report on the future of jobs. Its analyses refer to the Fourth Industrial Revolution and the WEF's Centre for the Fourth Industrial Revolution.
The report states that
“The Fourth Industrial Revolution is interacting with other socio-economic and demographic factors to create a perfect storm of business model change in all industries, resulting in major disruptions to labour markets. New categories of jobs will emerge, partly or wholly displacing others. The skill sets required in both old and new occupations will change in most industries and transform how and where people work. It may also affect female and male workers differently and transform the dynamics of the industry gender gap.
The Future of Jobs Report aims to unpack and provide specific information on the relative magnitude of these trends by industry and geography, and on the expected time horizon for their impact to be felt on job functions, employment levels and skills.”

The report concludes that by 2022 more jobs may be created than are lost, but that various stakeholders, including those making education policy, must make wise decisions.

Vehicle automation: safe design, scientific advances, and smart policy

Following previous policy posts on terminology and popular discourse about AI, the focus today is on how the way we talk about automation affects policy. “Unmanned autonomous vehicle (UAV)” is a term that justifiably creates fear in the general public, but talk about a UAV usually misses the roles of humans and human decision making. Likewise, discussion of an “automated decision maker (ADM)” ignores the social and legal responsibility of those who design, manufacture, implement, and operate “autonomous” systems. The AI community has an important role in promoting correct and realistic use of concepts and issues in discussions of science and technology systems that increase automation. The concept of a “hybrid system” might be helpful here for understanding the potential and limitations of combinations of technologies and humans in AI and Autonomous Systems (AI/AS) that require less from humans over time.

Safe Design

In addition to avoiding confusion and managing expectations, design approaches and analyses of the performance of existing systems with automation are crucial to developing safe systems with which the public and policymakers can feel comfortable. In this regard, stakeholders should read information on the design of systems with automation components, such as the IEEE report “Ethically Aligned Design,” which is subtitled “A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems.” The report says about AI and Autonomous Systems (AI/AS), “We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives.” See also Ben Shneiderman’s excellent summary of and comments on the report, the YouTube video of his Turing Institute Lecture on “Algorithmic Accountability: Design for Safety,” and his proposal for a National Algorithms Safety Board.

Advances in AI/AS Science and Technology

Another perspective on the automation issue is the need to increase the safety of systems through advances in science and technology. In a future blog, we will present the transcript of an interview with Dr. Harold Szu about the need for a next generation of AI that moves closer to brain-style computing and incorporates human behaviors into AI/AS systems. Dr. Szu was the founder and former president, and former governor, of the International Neural Network Society. He is acknowledged for outstanding contributions to artificial neural network (ANN) applications and scientific innovations.

Policy and Ethics

Over the summer of 2018, increased activity in Congress and state legislatures focused on understandings, accurate and not, of “unmanned autonomous vehicles” and on what policies should be in place. The following examples are interesting for possible interventions, but also for their use of AI/AS terminology:

  • The House Energy & Commerce Committee’s press release on the SELF DRIVE Act.
  • CNBC commentary by Reps. Bob Latta (R-OH) and Jan Schakowsky (D-IL).
  • Politico, 08/03/2018: “Trial lawyers speak out on Senate self-driving car bill,” by Brianna Gurciullo with help from Lauren Gardner:
“AV NON-STARTER: After being mum for months, the American Association for Justice said publicly Thursday that it has been pressing for the Senate’s self-driving car bill, S. 1885 (115) (definitions on p.42), to stipulate that companies can’t force arbitration, our Tanya Snyder reports for Pros. The trial lawyers group is calling for a provision to make sure ‘when a person, whether a passenger or pedestrian, is injured or killed by a driverless car, that person or their family is not forced into a secret arbitration proceeding,’ according to a statement. Senate Commerce Chairman John Thune (R-S.D.) has said that arbitration has been ‘a thorny spot’ in bill negotiations.”

Privacy Challenges for Election Policies

A CBS/AP article discusses the difficulty of social media companies’ efforts to prevent meddling in U.S. elections: “Facebook is spending heavily to prevent a repeat of the Russian interference that played out on its service in 2016. The social-media giant is bringing on thousands of human moderators and advanced artificial intelligence systems to weed out fake accounts and foreign propaganda campaigns.”

ACM Code of Ethics and USACM’s New Name

ACM Code of Ethics
Please note the message from ACM Headquarters and check the link below: “On Tuesday, July 17, ACM plans to announce the updated Code of Ethics and Professional Conduct. We would like your support in helping to reach as broad an audience of computing professionals as possible with this news. When the updated Code goes live at 10 a.m. EDT on July 17, it will be hosted at https://www.acm.org/code-of-ethics.
We encourage you to share the updated Code with your friends and colleagues at that time. If you use social media, please take part in the conversation around computing ethics using the hashtags #ACMCodeOfEthics and #IReadTheCode. And if you are not doing so already, please follow the @TheOfficialACM and @ACM_Ethics Twitter handles to share and engage with posts about the Code.  ACM also plans to host a Reddit AMA and Twitter chats on computing ethics in the weeks following this announcement. We will reach out to you again regarding these events when their details have been solidified.
Thank you in advance for helping to support and increase awareness of the ACM Code of Ethics and for promoting ethical conduct among computing professionals around the world.”

News From the ACM US Technology Policy Committee
USACM has a new name: the ACM US Technology Policy Committee. Please note the change and remember that SIGAI will continue to have a close relationship with the committee. Here is a reminder of its purpose and goals: “The ACM US Technology Policy Committee is a leading independent and nonpartisan voice in addressing US public policy issues related to computing and information technology. The Committee regularly educates and informs Congress, the Administration, and the courts about significant developments in the computing field and how those developments affect public policy in the United States. The Committee provides guidance and expertise in varied areas, including algorithmic accountability, artificial intelligence, big data and analytics, privacy, security, accessibility, digital governance, intellectual property, voting systems, and tech law. As the internet is global, the ACM US Technology Policy Committee works with the other ACM policy entities on publications and projects related to cross-border issues, such as cybersecurity, encryption, cloud computing, the Internet of Things, and internet governance.”

The ACM US Technology Policy Committee’s New Leadership
ACM has named Prof. Jim Hendler as the new Chair of the ACM U.S. Technology Policy Committee (formerly USACM) under the new ACM Technology Policy Council. In addition to being a distinguished computer science professor at RPI, Jim has long been an active USACM member and has served as both a committee chair and an at-large representative. He is a great choice to guide the committee into the future within ACM’s new technology policy structure. Please join SIGAI Public Policy in congratulating Jim. Our congratulations and appreciation also go to outgoing Chair Stuart Shapiro for his outstanding leadership of USACM.

AI’s Role in Cancer Research

Guest Post by Anna Suarez

It’s no secret that the general public has mixed views about artificial intelligence, largely stemming from a misunderstanding of the topic. In the public’s mind, AI tends to be equated with the creation of nearly lifelike robots and, although it sometimes is, there is much more to this rapidly advancing technology.

In today’s society, AI plays an everyday role in the lives of most people, from ride-hailing apps like Uber and Lyft to Facebook’s facial recognition, and the technology is constantly advancing. For example, Google’s new AI assistant is revolutionizing the way people go about their daily tasks, and recent AI-enabled advancements in healthcare are changing the way cancer research is approached.

Although the term “AI” dates back to the mid-1950s, the technology has become so sophisticated in recent years that a cure for cancer could be around the corner. Vice President Joe Biden’s Cancer Moonshot Initiative aims to find a cure for cancer and provide patients with more treatment options by using AI technology to process and sort data from cancer researchers. As part of the initiative’s mission of driving 10 years’ worth of research in only five, AI is also being used to detect certain cancers earlier than is possible with other currently available diagnostic procedures.

The ability to diagnose certain cancers, including brain cancer, skin cancer, and mesothelioma, through the use of this technology is arguably one of the most important healthcare advancements to result from AI. This is especially groundbreaking for patients battling mesothelioma, a rare cancer that develops in the mesothelium of the lungs, heart, or abdomen. Mesothelioma has a decades-long latency period, and its symptoms are often mistaken for those of more common ailments. Unfortunately, the disease has an average prognosis of 6-12 months, leaving patients with little time to coordinate treatment. Ultimately, earlier detection and diagnosis of cancers may lead to better prognoses and outcomes.

Using AI, the cloud, and other tools, companies like IBM and Microsoft are attempting to change the way healthcare is approached. IBM’s supercomputer Watson famously won $1 million on Jeopardy! in 2011 against two of the show’s most successful contestants, and IBM is now working to use it to make diagnosing diseases in patients faster and more efficient.

Although IBM Watson hasn’t made as much progress as anticipated, the technology proved capable in a 2017 study. The research compared the amount of time it took Watson to create a treatment plan with the amount of time it took doctors. The results showed that Watson was able to create a treatment plan for a brain cancer patient in 10 minutes, while the same process took doctors 160 hours.

However, the study wasn’t a complete win for Watson. Watson’s suggested plan of action was comparatively short-sighted because of its inability to consider multiple treatment options. While the doctors were able to weigh several possibilities at once, Watson could not.

Healthcare NExT, Microsoft’s internal initiative announced in 2017, is also focusing on using technology to find solutions to questions in the healthcare industry, including a cure for cancer.

In a Microsoft blog post about Healthcare NExT, Peter Lee, Corporate Vice President of Microsoft AI + Research, says Microsoft is “expanding [its] commitment to building a healthier future with new initiatives and solutions, making it easier for health industry partners and organizations to use intelligent technology to improve the lives of people around the world.”

Technology has changed dramatically within the past decade. Newly developed diagnostic methods and approaches to healthcare that we could only have dreamed of in the past are changing the way we see the world today. Although machines are still learning and there is a lot of room for improvement, the work that’s been done in such a short period of time is nothing short of incredible.

Are we closer to a cure for cancer than even we know?

Resources:

https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/#74e645b846fa

https://www.theverge.com/2018/5/8/17332070/google-assistant-makes-phone-call-demo-duplex-io-2018

https://www.maacenter.org/mesothelioma/

https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/ibm-watson-makes-treatment-plan-for-brain-cancer-patient-in-10-minutes-doctors-take-160-hours

https://blogs.microsoft.com/blog/2017/02/16/microsoft-partners-combine-cloud-ai-research-industry-expertise-focus-transforming-health-care/

White House OSTP Petition

USACM and the Electronic Privacy Information Center (EPIC) have teamed up to petition the White House’s Office of Science and Technology Policy to construct and publicize a formal process by which the public can have input into the work of the recently named Select Committee on Artificial Intelligence. Several associations and currently about 75 individual professionals, many of them ACM members, have signed on to the letter. You may have received an email message about this recently from SIGAI.

The petition states that “The undersigned technical experts, legal scholars, and affiliated organizations formally request that the Office of Science and Technology Policy (OSTP) undertake a Request for Information (RFI) and solicit public comments so as to encourage meaningful public participation in the development of the nation’s policy for Artificial Intelligence. This request follows from the recent establishment of a Select Committee on Artificial Intelligence and a similar OSTP RFI that occurred in 2016.”

Any technical expert with a relevant background, irrespective of ACM affiliation, who is interested in signing the letter should e-mail Jeramie Scott <jscott@epic.org> and Adam Eisgrau <eisgrau@hq.acm.org> as soon as possible. The goal is to have 100 individual signers, and the organizers hope to send the petition to the White House shortly after the July 4th holiday. If you would like to be added to the letter, send your name, title, and the school, company, or other affiliation (for identification purposes only) that you would like listed.

Data Privacy

Data Privacy Policy – ACM and SIGAI Emerging Issue

An issue recently raised involves the data privacy of SIGAI and ACM members using EasyChair to submit articles for publication, including the AI Matters Newsletter. As part of entering a new submission through EasyChair, the following message appears:
“AI Matters, 2014-present, is an ACM conference. The age and gender fields are added by ACM. By providing the information requested, you will help ACM to better understand where it stands in terms of diversity to be able to focus on areas of improvement.
It is mandatory for the submitting author (but you can select “prefer not to submit”) and it is desirable that you fill it out for all authors.
This information will be deleted from EasyChair after the conference.”

To evaluate the likelihood of privacy protection, one should pay attention to the EasyChair Terms of Service, particularly Section 6, “Use of Personal Information.” Further investigation may allow a better assessment of the level of risk our members take on if they choose to enter personal information. Your Public Policy Officer is working with the other SIGAI officers to clarify the issues and make recommendations for possible changes in ACM policy.

Please send your views on this topic to SIGAI and contribute comments to this Blog.

Policy News Matters

At its annual meeting this week, the American Medical Association issued a statement, “AMA Passes First Policy Recommendations on Augmented Intelligence,” adopting broad policy recommendations for health and technology stakeholders. The statement quotes AMA Board Member Jesse M. Ehrenfeld as follows: “As technology continues to advance and evolve, we have a unique opportunity to ensure that augmented intelligence is used to benefit patients, physicians, and the broad health care community. Combining AI methods and systems with an irreplaceable human clinician can advance the delivery of care in a way that outperforms what either can do alone. But we must forthrightly address challenges in the design, evaluation and implementation as this technology is increasingly integrated into physicians’ delivery of care to patients.”

AI Terminology Matters

In the daily news and social media, AI terms are part of the popular lexicon for better or for worse. AI technology is both praised and feared in different corners of society. Big data practitioners and even educators add confusion by misusing AI terms and concepts.

“Algorithm” and “machine learning” may be the most prevalent terms that are picked up in the popular dialogue, including in the important fields of ethics and policy. The ACM and SIGAI could have a critical educational role in the public sphere. In the area of policy, the correct use of AI terms and concepts is important for establishing credibility with the scientific community and for creating policy that addresses the real problems.
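
As a concrete illustration of the distinction (a minimal sketch only: the data, names, and thresholds below are hypothetical, chosen just for this example), consider the difference between a hand-written algorithm, whose decision rule is fixed by a programmer, and a machine-learned model, whose rule is estimated from examples:

```python
# Illustrative sketch only: the data, function names, and thresholds are hypothetical.

# A hand-written algorithm: the decision rule is fixed by the programmer.
def approve_by_rule(income, debt):
    """Approve if debt is under 40% of income -- a rule a human chose and can explain."""
    return debt < 0.4 * income


# A tiny "machine learning" step: the decision rule (a debt-to-income threshold)
# is chosen to fit labeled historical examples rather than written by hand.
def fit_ratio_threshold(examples):
    """examples: list of (income, debt, approved) tuples.
    Returns the candidate threshold that misclassifies the fewest examples."""
    candidates = [0.05 * k for k in range(1, 20)]

    def errors(threshold):
        return sum(
            ((debt / income) < threshold) != approved
            for income, debt, approved in examples
        )

    return min(candidates, key=errors)


if __name__ == "__main__":
    history = [
        (60000, 10000, True),
        (50000, 30000, False),
        (80000, 25000, True),
        (40000, 28000, False),
    ]
    print("Hand-written rule says:", approve_by_rule(55000, 20000))
    print("Threshold learned from data:", round(fit_ratio_threshold(history), 2))
```

The policy-relevant difference is where the rule comes from: a person can be asked to justify the hand-chosen 0.4, while the learned threshold is an artifact of the training data, which is where questions of transparency and accountability tend to land.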

In recent weeks, interesting articles have appeared by writers diverse in the degree of scientific expertise. A June issue of The Atlantic has an article by Henry Kissinger entitled “How the Enlightenment Ends” with the thesis that society is not prepared for AI. While some of the understanding of AI concepts can be questioned, the conclusion is reasonable: “AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.”

In May, The Atlantic published an article at the other extreme of scientific expertise, by Kevin Hartnett, entitled “How a Pioneer of Machine Learning Became One of Its Sharpest Critics.” He writes about an interview with Judea Pearl on his current thinking, laid out with Dana Mackenzie in The Book of Why: The New Science of Cause and Effect. The interview includes a criticism of deep-learning research and a call for a more fundamental approach.

Back to policy: I recently attended a DC event of the Center for Data Innovation on a proposed policy framework for creating accountability in the use of algorithms. They have a report on the same topic. The event was another reminder of the diverse groups in the public dialogue on critical issues for AI and of the need to bring policymakers and the scientific community together. SIGAI has a big role to play.

White House AI Summit

Updates and Reminders

  • AAAS Forum on Science & Technology Policy, Washington, D.C., June 21–22, 2018.
  • Potential revival of the Office of Technology Assessment (OTA): the House appropriations subcommittee report states, “Technology Assessment Study: The Committee has heard testimony on, and received dozens of requests advocating for restoring funding to the Office of Technology Assessment (OTA).”
  • The White House’s new artificial intelligence advisory committee.

White House 2018 Summit on AI for American Industry

Background from the report:

“Artificial intelligence (AI) has tremendous potential to benefit the American people, and has already demonstrated immense value in enhancing our national security and growing our economy.

AI is quickly transforming American life and American business, improving how we diagnose and treat illnesses, grow our food, manufacture and deliver new products, manage our finances, power our homes, and travel from point A to point B.

On May 10, 2018, the White House hosted the Artificial Intelligence for American Industry summit, to discuss the promise of AI and the policies we will need to realize that promise for the American people and maintain U.S. leadership in the age of artificial intelligence.

‘Artificial intelligence holds tremendous potential as a tool to empower the American worker, drive growth in American industry, and improve the lives of the American people. Our free market approach to scientific discovery harnesses the combined strengths of government, industry, and academia, and uniquely positions us to leverage this technology for the betterment of our great nation.’
– Michael Kratsios, Deputy Assistant to the President for Technology Policy

The summit brought together over 100 senior government officials, technical experts from top academic institutions, heads of industrial research labs, and American business leaders who are adopting AI technologies to benefit their customers, workers, and shareholders.”

Issues addressed at the 2018 summit are as follows:

  • Support for the national AI R&D ecosystem – “free market approach to scientific discovery that harnesses the combined strengths of government, industry, and academia.”
  • American workforce that can take full advantage of the benefits of AI – “new types of jobs and demand for new technical skills across industries … efforts to prepare America for the jobs of the future, from a renewed focus on STEM education throughout childhood and beyond, to technical apprenticeships, re-skilling, and lifelong learning programs to better match America’s skills with the needs of industry.”
  • Barriers to AI innovation in the United States – included “need to promote awareness of AI so that the public can better understand how these technologies work and how they can benefit our daily lives.”
  • High-impact, sector-specific applications of AI – “novel ways industry leaders are using AI technologies to empower the American workforce, grow their businesses, and better serve their customers.”

See details in the Summary of the 2018 White House Summit on AI for American Industry.