Events and Announcements

AAAI Policy Initiative

AAAI has established a new mailing list devoted exclusively to the discussion of US policy matters related to artificial intelligence. All members and affiliates are invited to join the list at https://aaai.org/Organization/mailing-lists.php.

Participants may subscribe or unsubscribe at any time. The mailing list will be moderated, and all posts will be approved before dissemination. This is a great opportunity for another productive partnership between AAAI and SIGAI on policy work.

EPIC Panel on June 5th

A panel on AI, Human Rights, and US Policy will be hosted by the Electronic Privacy Information Center (EPIC) at its annual meeting (and celebration of its 25th anniversary) on June 5, 2019, at the National Press Club in Washington, DC. Our own Lorraine Kisselburgh will join Harry Lewis (Harvard), Sherry Turkle (MIT), Lynne Parker (UTenn and White House OSTP director for AI), Sarah Box (OECD), and Bilyana Petkova (EPIC and Maastricht) to discuss AI policy directions for the US. The event is free and open to the public. You can register at https://epic.org/events/June5AIpanel/.

2019 ACM SIGAI Election Reminder

Please remember to vote, and review the information at http://www.acm.org/elections/sigs/voting-page. Please note that the deadline for submitting your vote is 16:00 UTC on 14 June 2019. To access the secure voting site, enter your email address (the one associated with your ACM/SIG member record) to reach the menu of active SIG elections for which you are eligible. In the online menu, select your Special Interest Group and enter the 10-digit unique PIN.

AI Research Roadmap

The Computing Community Consortium (CCC) is requesting comments on the draft of A 20-Year Community Roadmap for AI Research in the US. Please submit your comments by May 28, 2019. See the AI Roadmap website for more information.

Here are links to the whole report and to its individual sections:

     Title Page, Executive Summary, and Table of Contents 

  1. Introduction
  2. Major Societal Drivers for Future Artificial Intelligence Research 
  3. Overview of Core Technical Areas of AI Research Roadmap: Workshop Reports 
    1. Workshop I: A Research Roadmap for Integrated Intelligence 
    2. Workshop II: A Research Roadmap for Meaningful Interaction 
    3. Workshop III: A Research Roadmap for Self-Aware Learning 
  4. Major Findings 
  5. Recommendations
  6. Conclusions

Appendices (participants and contributors)

AI Hype Not

A recent item in Science|Business, “Artificial intelligence nowhere near the real thing, says German AI chief” by Éanna Kelly, gives policy-worthy warnings and ideas: “In his 20 years as head of Germany’s biggest AI research lab, Wolfgang Wahlster has seen the tech hype machine splutter three times. As he hands over to a new CEO, he warns colleagues: ‘Don’t over-promise’. … [T]he computer scientist who has just ended a 20-year stint as CEO of the German Research Centre for Artificial Intelligence says that [warning] greatly underestimates the distance between AI and its human counterpart: ‘We’re years away from a game changer in the field. I always warn people, one should be a bit careful with what they claim. Every day you work on AI, you see the big gap between human intelligence and AI’, Wahlster told Science|Business.”

For AI policy, we should remember to watch out for over-promising, but we also need to be mindful of the time frame for making effective policy and be fully engaged now. Our effort plays an important role in informing policymakers about the real opportunities to make AI successful. A recent article in The Conversation by Ben Shneiderman, “What alchemy and astrology can teach artificial intelligence researchers,” gives insightful information and advice on how to avoid being distracted away “… from where the real progress is already happening: in systems that enhance – rather than replace – human capabilities.” Shneiderman recommends that technology designers shift “from trying to replace or simulate human behavior in machines to building wildly successful applications that people love to use.”

The American AI Initiative and EAAI

President Trump issued an Executive Order on February 11, 2019, entitled “Maintaining American Leadership in Artificial Intelligence”. The full text is at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/. The American AI Initiative, of course, needs analysis and implementation details. Two sections of the Executive Order give hope for opportunities to provide public input:

Sec (5)(a)(1)(i): Within 90 days of the date of this order, the OMB Director shall publish a notice in the Federal Register inviting the public to identify additional requests for access or quality improvements for Federal data and models that would improve AI R&D and testing. …[T]hese actions by OMB will help to identify datasets that will facilitate non-Federal AI R&D and testing.
and
Sec (6)(b): To help ensure public trust in the development and implementation of AI applications, OMB shall issue a draft version of the memorandum for public comment before it is finalized.
Please stay tuned for ways that our ACM US Technology Policy Committee (USTPC) can help us provide feedback on the implementation of the Executive Order.

A summary and analysis report is available from the Center for Data Innovation: Executive Order Will Help Ensure U.S. Leadership in AI. They comment that the administration “needs to do more than reprogram existing funds for AI research, skill development, and infrastructure development” and “should ask Congress for significant funding increases to (a) expand these research efforts; (b) implement light-touch regulation for AI; (c) resist calls to implement roadblocks or speed bumps for this technology, including export restrictions; (d) rapidly expand adoption of AI within government; and implement comprehensive reforms to the nation’s workforce training and adjustment policies.”

The latter point was a topic in my invited talk at EAAI-19. Opportunities and innovation in education and training for the workforce of the future rely crucially on public policymaking about workers in the era of increasing use of AI and other automation technologies. An important issue is who will provide training that is timely (by 2030), practical, and affordable for workers who are affected by job disruptions and are transitioning to the predicted post-automation jobs. Along with workers, the stakeholders include schools, employers, unions, community groups, and others. Even if more jobs are created than lost, work in the AI future may not be available proportionately across the range of people in the current and near-future workforce.

Section 1 of the Executive Order “Maintaining American Leadership in Artificial Intelligence” follows:
Section 1.  Policy and Principles.  Artificial Intelligence (AI) promises to drive growth of the United States economy, enhance our economic and national security, and improve our quality of life. The United States is the world leader in AI research and development (R&D) and deployment.  Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.  The Federal Government plays an important role in facilitating AI R&D, promoting the trust of the American people in the development and deployment of AI-related technologies, training a workforce capable of using AI in their occupations, and protecting the American AI technology base from attempted acquisition by strategic competitors and adversarial nations.  Maintaining American leadership in AI requires a concerted effort to promote advancements in technology and innovation, while protecting American technology, economic and national security, civil liberties, privacy, and American values and enhancing international and industry collaboration with foreign partners and allies.  It is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy, the American AI Initiative (Initiative), guided by five principles:
(a)  The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.
(b)  The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today’s industries.
(c)  The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today’s economy and jobs of the future.
(d)  The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.
(e)  The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.

FTC Hearing on AI and Algorithms

FTC Hearing on AI and Algorithms: November 13 and 14 in Washington, DC

From the FTC:  The hearing will examine competition and consumer protection issues associated with the use of algorithms, artificial intelligence, and predictive analytics in business decisions and conduct. See detailed agenda. The record of that proceeding will be open until mid-February. To further its consideration of these issues, the agency seeks public comment on the questions, and it welcomes input on other related topics not specifically listed in the 25 questions.

Please send your thoughts on what SIGAI might submit in response to the 25 specific questions posed by the Commission (see below) to lrm@gwu.edu. The hearing will inform the FTC, other policymakers, and the public of
* the current and potential uses of these technologies;
* the ethical and consumer protection issues that are associated with the use of these technologies;
* how the competitive dynamics of firm and industry conduct are affected by the use of these technologies; and
* policy, innovation, and market considerations associated with the use of these technologies.

25 specific questions posed by the FTC

Background on Algorithms, Artificial Intelligence, and Predictive Analytics, and Applications of the Technologies

  1. What features distinguish products or services that use algorithms, artificial intelligence, or predictive analytics? In which industries or business sectors are they most prevalent?
  2. What factors have facilitated the development or advancement of these technologies? What types of resources were involved (e.g., human capital, financial, other)?
  3. Are there factors that have impeded the development of these technologies? Are there factors that could impede further development of these technologies?
  4. What are the advantages and disadvantages for consumers and for businesses of utilizing products or services facilitated by algorithms, artificial intelligence, or predictive analytics?
  5. From a technical perspective, is it sometimes impossible to ascertain the basis for a result produced by these technologies? If so, what concerns does this raise?
  6. What are the advantages and disadvantages of developing technologies for which the basis for the results can or cannot be determined? What criteria should determine when a “black box” system is acceptable, or when a result should be explainable?

Common Principles and Ethics in the Development and Use of Algorithms, Artificial Intelligence, and Predictive Analytics

  1. What are the main ethical issues (e.g., susceptibility to bias) associated with these technologies? How are the relevant affected parties (e.g., technologists, the business community, government, consumer groups, etc.) proposing to address these ethical issues? What challenges might arise in addressing them?
  2. Are there ethical concerns raised by these technologies that are not also raised by traditional computer programming techniques or by human decision-making? Are the concerns raised by these technologies greater or less than those of traditional computer programming or human decision-making? Why or why not?
  3. Is industry self-regulation and government enforcement of existing laws sufficient to address concerns, or are new laws or regulations necessary?
  4. Should ethical guidelines and common principles be tailored to the type of technology involved, or should the goal be to develop one overarching set of best practices?

Consumer Protection Issues Related to Algorithms, Artificial Intelligence, and Predictive Analytics

  1. What are the main consumer protection issues raised by algorithms, artificial intelligence, and predictive analytics?
  2. How well do the FTC’s current enforcement tools, including the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, address issues raised by these technologies?
  3. In recent years, the FTC has held public forums to examine the consumer protection questions raised by artificial intelligence as used in certain contexts (e.g., the 2017 FinTech Forum on artificial intelligence and blockchain and the 2011 Face Facts Forum on facial recognition technology). Since those events, have technological advancements, or the increased prevalence of certain technologies, raised new or increased consumer protection concerns?
  4. What roles should explainability, risk management, and human control play in the implementation of these technologies?
  5. What choices and notice should consumers have regarding the use of these technologies?
  6. What educational role should the FTC play with respect to these technologies? What would be most useful to consumers?

Competition Issues Related to Algorithms, Artificial Intelligence, and Predictive Analytics

  1. Does the use of algorithms, artificial intelligence, and predictive analytics currently raise particular antitrust concerns (including, but not limited to, concerns about algorithmic collusion)?
  2. What antitrust concerns could arise in the future with respect to these technologies?
  3. Is the current antitrust framework for analyzing mergers and conduct sufficient to address any competition issues that are associated with the use of these technologies? If not, why not, and how should the current legal framework be modified?
  4. To what degree do any antitrust concerns raised by these technologies depend on the industry or type of use?

Other Policy Questions

  1. How are these technologies affecting competition, innovation, and consumer choices in the industries and business sectors in which they are used today? How might they do so in the future?
  2. How quickly are these technologies advancing? What are the implications of that pace of technological development from a policy perspective?
  3. How can regulators meet legitimate regulatory goals that may be raised in connection with these technologies without unduly hindering competition or innovation?
  4. Are there tensions between consumer protection and competition policy with respect to these technologies? If so, what are they, and how should they be addressed?
  5. What responsibility does a company utilizing these technologies bear for consumer injury arising from its use of these technologies? Can current laws and regulations address such injuries? Why or why not?

Comments can be submitted online and are due no later than February 15, 2019. If any entity has provided funding for research, analysis, or commentary that is included in a submitted public comment, such funding and its source should be identified on the first page of the comment.

Policy in the News

The Computing Community Consortium (CCC) announced a new initiative to create a Roadmap for Artificial Intelligence. SIGAI’s Yolanda Gil (University of Southern California and President-Elect of AAAI) will work with Bart Selman (Cornell University) to lead the effort. The initiative will support the U.S. Administration’s efforts in this area and involve academic and industrial researchers to help map a course for needed research in AI. They will hold a series of workshops in 2018 and 2019 to produce the Roadmap by Spring 2019. The Computing Research Association (CRA) has been involved in shaping public policy of relevance to computing research for more than two decades (https://cra.org/govaffairs/blog/). The CRA Government Affairs program has enhanced its efforts to help the members of the computing research community contribute to the public debate knowledgeably and effectively.

Ed Felten, Princeton Professor of Computer Science and Public Affairs, has been confirmed by the U.S. Senate to be a member of the U.S. Privacy and Civil Liberties Oversight Board, a bipartisan agency within the executive branch. He will serve as a part-time member of the board while continuing his teaching and research at Princeton. The five-person board is charged with evaluating and advising on executive branch anti-terrorism measures with respect to privacy and civil liberties. “It is a very important issue,” Felten said. “Federal agencies, in the course of doing national security work, have access to a lot of data about people and they do intercept data. It’s important to make sure they are doing those things in the way they should and not overstepping.” Felten added that the board has the authority to review programs that require secrecy. “The public has limited visibility into some of these programs,” Felten said. “The board’s job is to look out for the public interest.”

On October 24, 2018, the National Academies of Sciences, Engineering, and Medicine Forum on Aging, Disability, and Independence will host a workshop in Washington, DC, that will explore the potential of artificial intelligence (AI) to foster a balance of safety and autonomy for older adults and people with disabilities who strive to live as independently as possible. Details are at http://nationalacademies.org/hmd/Activities/Aging/AgingDisabilityForum/2018-OCT-24.aspx.

According to Reuters, Amazon scrapped an AI recruiting tool that showed bias against women in automated employment screening.

ACM Code of Ethics and USACM’s New Name

ACM Code of Ethics
Please note the message from ACM Headquarters and check the link below: “On Tuesday, July 17, ACM plans to announce the updated Code of Ethics and Professional Conduct. We would like your support in helping to reach as broad an audience of computing professionals as possible with this news. When the updated Code goes live at 10 a.m. EDT on July 17, it will be hosted at https://www.acm.org/code-of-ethics.
We encourage you to share the updated Code with your friends and colleagues at that time. If you use social media, please take part in the conversation around computing ethics using the hashtags #ACMCodeOfEthics and #IReadTheCode. And if you are not doing so already, please follow the @TheOfficialACM and @ACM_Ethics Twitter handles to share and engage with posts about the Code.  ACM also plans to host a Reddit AMA and Twitter chats on computing ethics in the weeks following this announcement. We will reach out to you again regarding these events when their details have been solidified.
Thank you in advance for helping to support and increase awareness of the ACM Code of Ethics and for promoting ethical conduct among computing professionals around the world.”

News From the ACM US Technology Policy Committee
The USACM has a new name. Please note the change and remember that SIGAI will continue to have a close relationship with the ACM US Technology Policy Committee. Here is a reminder of the purpose and goals: “The ACM US Technology Policy Committee is a leading independent and nonpartisan voice in addressing US public policy issues related to computing and information technology. The Committee regularly educates and informs Congress, the Administration, and the courts about significant developments in the computing field and how those developments affect public policy in the United States. The Committee provides guidance and expertise in varied areas, including algorithmic accountability, artificial intelligence, big data and analytics, privacy, security, accessibility, digital governance, intellectual property, voting systems, and tech law. As the internet is global, the ACM US Technology Policy Committee works with the other ACM policy entities on publications and projects related to cross-border issues, such as cybersecurity, encryption, cloud computing, the Internet of Things, and internet governance.”

The ACM US Technology Policy Committee’s New Leadership
ACM has named Prof. Jim Hendler as the new Chair of the ACM U.S. Technology Policy Committee (formerly USACM) under the new ACM Technology Policy Council. In addition to being a distinguished computer science professor at RPI, Jim has long been an active USACM member and has served as both a committee chair and as an at-large representative. He is a great choice to guide USACM into the future within ACM’s new technology policy structure. Please join SIGAI Public Policy in personally congratulating Jim. Our congratulations and appreciation also go to outgoing Chair Stuart Shapiro for his outstanding leadership of USACM.

News from ACM SIGAI

We welcome ACM SIGAI China and its members to ACM SIGAI! ACM SIGAI China held its first event, the ACM SIGAI China Symposium on New Challenges and Opportunities in the Post-Turing AI Era, as part of the ACM Turing 50th Celebration Conference on May 12-14, 2017 in Shanghai. We will report details in an upcoming edition of AI Matters.

The winner of the ACM Prize in Computing is Alexei Efros from the University of California at Berkeley for his work on machine learning in computer vision and computer graphics. The award will be presented at the annual ACM Awards Banquet on June 24, 2017 in San Francisco.

We hope that you enjoyed the ACM Learning Webinar with Tom Mitchell on June 15, 2017, “Using Machine Learning to Study Neural Representations of Language Meaning”. If you missed it, it is now available on demand.

The “50 Years of the ACM Turing Award” Celebration will be held on June 23 and 24, 2017 in San Francisco. The ACM SIGAI recipients of the ACM Turing Scholarship to attend this high-profile meeting are Tim Lee from Carnegie Mellon University and Justin Svegliato from the University of Massachusetts at Amherst.

ACM SIGAI now has a 3-month membership requirement before students who join ACM SIGAI can apply for financial benefits from ACM SIGAI, such as fellowships and travel support. Please help us let all students know about this new requirement so that they can avoid any disappointment.