FTC Hearing on AI and Algorithms

FTC Hearing on AI and Algorithms: November 13 and 14 in Washington, DC

From the FTC: The hearing will examine competition and consumer protection issues associated with the use of algorithms, artificial intelligence, and predictive analytics in business decisions and conduct. See the detailed agenda. The record of that proceeding will be open until mid-February. To further its consideration of these issues, the agency seeks public comment on the questions, and it welcomes input on other related topics not specifically listed in the 25 questions.

Please send your thoughts to lrm@gwu.edu on what SIGAI might submit in response to the 25 specific questions posed by the Commission (see below). The hearing will inform the FTC, other policymakers, and the public of
* the current and potential uses of these technologies;
* the ethical and consumer protection issues that are associated with the use of these technologies;
* how the competitive dynamics of firm and industry conduct are affected by the use of these technologies; and
* policy, innovation, and market considerations associated with the use of these technologies.

The 25 Specific Questions Posed by the FTC

Background on Algorithms, Artificial Intelligence, and Predictive Analytics, and Applications of the Technologies

  1. What features distinguish products or services that use algorithms, artificial intelligence, or predictive analytics? In which industries or business sectors are they most prevalent?
  2. What factors have facilitated the development or advancement of these technologies? What types of resources were involved (e.g., human capital, financial, other)?
  3. Are there factors that have impeded the development of these technologies? Are there factors that could impede further development of these technologies?
  4. What are the advantages and disadvantages for consumers and for businesses of utilizing products or services facilitated by algorithms, artificial intelligence, or predictive analytics?
  5. From a technical perspective, is it sometimes impossible to ascertain the basis for a result produced by these technologies? If so, what concerns does this raise?
  6. What are the advantages and disadvantages of developing technologies for which the basis for the results can or cannot be determined? What criteria should determine when a “black box” system is acceptable, or when a result should be explainable?

Common Principles and Ethics in the Development and Use of Algorithms, Artificial Intelligence, and Predictive Analytics

  7. What are the main ethical issues (e.g., susceptibility to bias) associated with these technologies? How are the relevant affected parties (e.g., technologists, the business community, government, consumer groups, etc.) proposing to address these ethical issues? What challenges might arise in addressing them?
  8. Are there ethical concerns raised by these technologies that are not also raised by traditional computer programming techniques or by human decision-making? Are the concerns raised by these technologies greater or less than those of traditional computer programming or human decision-making? Why or why not?
  9. Is industry self-regulation and government enforcement of existing laws sufficient to address concerns, or are new laws or regulations necessary?
  10. Should ethical guidelines and common principles be tailored to the type of technology involved, or should the goal be to develop one overarching set of best practices?

Consumer Protection Issues Related to Algorithms, Artificial Intelligence, and Predictive Analytics

  11. What are the main consumer protection issues raised by algorithms, artificial intelligence, and predictive analytics?
  12. How well do the FTC’s current enforcement tools, including the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, address issues raised by these technologies?
  13. In recent years, the FTC has held public forums to examine the consumer protection questions raised by artificial intelligence as used in certain contexts (e.g., the 2017 FinTech Forum on artificial intelligence and blockchain and the 2011 Face Facts Forum on facial recognition technology). Since those events, have technological advancements, or the increased prevalence of certain technologies, raised new or increased consumer protection concerns?
  14. What roles should explainability, risk management, and human control play in the implementation of these technologies?
  15. What choices and notice should consumers have regarding the use of these technologies?
  16. What educational role should the FTC play with respect to these technologies? What would be most useful to consumers?

Competition Issues Related to Algorithms, Artificial Intelligence, and Predictive Analytics

  17. Does the use of algorithms, artificial intelligence, and predictive analytics currently raise particular antitrust concerns (including, but not limited to, concerns about algorithmic collusion)?
  18. What antitrust concerns could arise in the future with respect to these technologies?
  19. Is the current antitrust framework for analyzing mergers and conduct sufficient to address any competition issues that are associated with the use of these technologies? If not, why not, and how should the current legal framework be modified?
  20. To what degree do any antitrust concerns raised by these technologies depend on the industry or type of use?

Other Policy Questions

  21. How are these technologies affecting competition, innovation, and consumer choices in the industries and business sectors in which they are used today? How might they do so in the future?
  22. How quickly are these technologies advancing? What are the implications of that pace of technological development from a policy perspective?
  23. How can regulators meet legitimate regulatory goals that may be raised in connection with these technologies without unduly hindering competition or innovation?
  24. Are there tensions between consumer protection and competition policy with respect to these technologies? If so, what are they, and how should they be addressed?
  25. What responsibility does a company utilizing these technologies bear for consumer injury arising from its use of these technologies? Can current laws and regulations address such injuries? Why or why not?

Comments can be submitted online and should be submitted no later than February 15, 2019. If any entity has provided funding for research, analysis, or commentary that is included in a submitted public comment, such funding and its source should be identified on the first page of the comment.

Policy in the News

The Computing Community Consortium (CCC) announced a new initiative to create a Roadmap for Artificial Intelligence. SIGAI’s Yolanda Gil (University of Southern California and President-Elect of AAAI) will work with Bart Selman (Cornell University) to lead the effort. The initiative will support the U.S. Administration’s efforts in this area and involve academic and industrial researchers to help map a course for needed research in AI. They will hold a series of workshops in 2018 and 2019 to produce the Roadmap by Spring of 2019. The Computing Research Association (CRA) has been involved in shaping public policy of relevance to computing research for more than two decades (https://cra.org/govaffairs/blog/). The CRA Government Affairs program has enhanced its efforts to help the members of the computing research community contribute to the public debate knowledgeably and effectively.

Ed Felten, Princeton Professor of Computer Science and Public Affairs, has been confirmed by the U.S. Senate to be a member of the U.S. Privacy and Civil Liberties Oversight Board, a bipartisan agency within the executive branch. He will serve as a part-time member of the board while continuing his teaching and research at Princeton. The five-person board is charged with evaluating and advising on executive branch anti-terrorism measures with respect to privacy and civil liberties. “It is a very important issue,” Felten said. “Federal agencies, in the course of doing national security work, have access to a lot of data about people and they do intercept data. It’s important to make sure they are doing those things in the way they should and not overstepping.” Felten added that the board has the authority to review programs that require secrecy. “The public has limited visibility into some of these programs,” Felten said. “The board’s job is to look out for the public interest.”

On October 24, 2018, the National Academies of Sciences, Engineering, and Medicine Forum on Aging, Disability, and Independence will host a workshop in Washington, DC, that will explore the potential of artificial intelligence (AI) to foster a balance of safety and autonomy for older adults and people with disabilities who strive to live as independently as possible (http://nationalacademies.org/hmd/Activities/Aging/AgingDisabilityForum/2018-OCT-24.aspx).

According to Reuters, Amazon scrapped an AI recruiting tool that showed bias against women in automated employment screening.

ACM Code of Ethics and USACM’s New Name

ACM Code of Ethics
Please note the message from ACM Headquarters and check the link below: “On Tuesday, July 17, ACM plans to announce the updated Code of Ethics and Professional Conduct. We would like your support in helping to reach as broad an audience of computing professionals as possible with this news. When the updated Code goes live at 10 a.m. EDT on July 17, it will be hosted at https://www.acm.org/code-of-ethics.
We encourage you to share the updated Code with your friends and colleagues at that time. If you use social media, please take part in the conversation around computing ethics using the hashtags #ACMCodeOfEthics and #IReadTheCode. And if you are not doing so already, please follow the @TheOfficialACM and @ACM_Ethics Twitter handles to share and engage with posts about the Code.  ACM also plans to host a Reddit AMA and Twitter chats on computing ethics in the weeks following this announcement. We will reach out to you again regarding these events when their details have been solidified.
Thank you in advance for helping to support and increase awareness of the ACM Code of Ethics and for promoting ethical conduct among computing professionals around the world.”

News From the ACM US Technology Policy Committee
The USACM has a new name. Please note the change and remember that SIGAI will continue to have a close relationship with the ACM US Technology Policy Committee. Here is a reminder of the purpose and goals: “The ACM US Technology Policy Committee is a leading independent and nonpartisan voice in addressing US public policy issues related to computing and information technology. The Committee regularly educates and informs Congress, the Administration, and the courts about significant developments in the computing field and how those developments affect public policy in the United States. The Committee provides guidance and expertise in varied areas, including algorithmic accountability, artificial intelligence, big data and analytics, privacy, security, accessibility, digital governance, intellectual property, voting systems, and tech law. As the internet is global, the ACM US Technology Policy Committee works with the other ACM policy entities on publications and projects related to cross-border issues, such as cybersecurity, encryption, cloud computing, the Internet of Things, and internet governance.”

The ACM US Technology Policy Committee’s New Leadership
ACM has named Prof. Jim Hendler as the new Chair of the ACM U.S. Technology Policy Committee (formerly USACM) under the new ACM Technology Policy Council. In addition to being a distinguished computer science professor at RPI, Jim has long been an active USACM member and has served as both a committee chair and as an at-large representative. He is a great choice to guide the Committee into the future within ACM’s new technology policy structure. Please join SIGAI Public Policy in congratulating Jim. Our congratulations and appreciation also go to outgoing Chair Stuart Shapiro for his outstanding leadership of USACM.

News from ACM SIGAI

We welcome ACM SIGAI China and its members to ACM SIGAI! ACM SIGAI China held its first event, the ACM SIGAI China Symposium on New Challenges and Opportunities in the Post-Turing AI Era, as part of the ACM Turing 50th Celebration Conference on May 12-14, 2017 in Shanghai. We will report details in an upcoming edition of AI Matters.

The winner of the ACM Prize in Computing is Alexei Efros from the University of California at Berkeley for his work on machine learning in computer vision and computer graphics. The award will be presented at the annual ACM Awards Banquet on June 24, 2017 in San Francisco.

We hope that you enjoyed the ACM Learning Webinar with Tom Mitchell on June 15, 2017, “Using Machine Learning to Study Neural Representations of Language Meaning”. If you missed it, the recording is now available on demand.

The “50 Years of the ACM Turing Award” Celebration will be held on June 23 and 24, 2017 in San Francisco. The ACM SIGAI recipients of the ACM Turing Scholarship to attend this high-profile meeting are Tim Lee from Carnegie Mellon University and Justin Svegliato from the University of Massachusetts at Amherst.

ACM SIGAI now requires students to have been ACM SIGAI members for at least 3 months before they can apply for financial benefits from ACM SIGAI, such as fellowships and travel support. Please help us let all students know about this new requirement to avoid any disappointment.