Bias, Ethics, and Policy

We are planning a series of posts on Bias, starting with the background and context of bias in general and then focusing on specific instances of bias in current and emerging areas of AI. Ultimately, this information is intended to inform ideas on public policy. We look forward to your comments and suggestions for a robust discussion.

The extensive survey “A Survey on Bias and Fairness in Machine Learning” by Ninareh Mehrabi et al. will be useful for the conversation. The guest co-author of the ACM SIGAI Public Policy blog posts on bias will be Farhana Faruqe, a doctoral student in the George Washington University Human-Technology Collaboration program.
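As background for the discussion, the fairness literature surveyed by Mehrabi et al. includes simple group-fairness metrics. Here is a minimal illustrative sketch (not taken from the survey itself; the function name and data are hypothetical examples) of one commonly discussed metric, statistical parity difference:

```python
# Illustrative sketch of one group-fairness metric from the ML fairness
# literature: statistical parity difference, the gap between the rates at
# which two groups receive a favorable outcome. Names and data are examples.

def statistical_parity_difference(outcomes, groups, group_a, group_b, favorable=1):
    """P(favorable | group_a) - P(favorable | group_b); 0.0 means parity."""
    def rate(group):
        # Outcomes for members of this group only
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in members if o == favorable) / len(members)
    return rate(group_a) - rate(group_b)

# Hypothetical screening decisions (1 = favorable) for two groups:
outcomes = [1, 1, 0, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(outcomes, groups, "a", "b"))  # 0.25
```

A single number like this is only a starting point; the survey catalogs many fairness definitions, some of which conflict with one another, which is part of what makes policy guidance difficult.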

A related announcement concerns the new section on AI and Ethics in the Springer Nature Computer Science journal. “The AI & Ethics section focuses on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. It seeks to promote informed debate and discussion of the current and future developments in AI, and the ethical, moral, regulatory, and policy implications that arise from these developments.” As a Co-Editor of the new section, I welcome you to submit a manuscript and to contact me with any questions or suggestions.

PCAST and AI Plan

Executive Order on The President’s Council of Advisors on Science and Technology (PCAST)

President Trump issued an executive order on October 22 re-establishing the President’s Council of Advisors on Science and Technology (PCAST), an advisory body that consists of science and technology leaders from the private and academic sectors. PCAST is to be chaired by Kelvin Droegemeier, director of the Office of Science and Technology Policy, and Edward McGinnis, formerly with DOE, is to serve as the executive director. The majority of the 16 members are from key industry sectors. The executive order says that the council is expected to address “strengthening American leadership in science and technology, building the Workforce of the Future, and supporting foundational research and development across the country.” For more information, see the Inside Education article about the first appointments.

Schumer AI Plan

Jeffrey Mervis has a November 11, 2019, article in AAAS News from Science on a recommendation that the government create a new agency funded with $100 billion over 5 years for basic AI research. “Senator Charles Schumer (D–NY) says the initiative would enable the United States to keep pace with China and Russia in a critical research arena and plug gaps in what U.S. companies are unwilling to finance.”

Schumer presented his ideas publicly in an early-November speech to senior national security and research policymakers, following a recent presidential executive order. He wants to create a new national science and technology fund for “fundamental research related to AI and some other cutting-edge areas” such as quantum computing, 5G networks, robotics, cybersecurity, and biotechnology. Funds would encourage research at U.S. universities, companies, and other federal agencies and support incubators for moving research into commercial products. An additional article can be found in Defense News.

National AI Strategy

The National Artificial Intelligence Research and Development Strategic Plan – an update of the report by the Select Committee on Artificial Intelligence of The National Science & Technology Council – was released in June 2019, and the President’s Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, was released on February 11. The Computing Community Consortium (CCC) recently released the AI Roadmap Website, and an interesting industry response is “Intel Gets Specific on a National Strategy for AI: How to Propel the US into a Sustainable Leadership Position on the Global Artificial Intelligence Stage” by Naveen Rao and David Hoffman. Excerpts follow, and the accompanying links provide the details:

“AI is more than a matter of making good technology; it is also a matter of making good policy. And that’s what a robust national AI strategy will do: continue to unlock the potential of AI, prepare for AI’s many ramifications, and keep the U.S. among leading AI countries. At least 20 other countries have published, and often funded, their national AI strategies. Last month, the administration signaled its commitment to U.S. leadership in AI by issuing an executive order to launch the American AI Initiative, focusing federal government resources to develop AI. Now it’s time to take the next step and bring industry and government together to develop a fully realized U.S. national strategy to continue leading AI innovation.

“… But to sustain leadership and effectively manage the broad social implications of AI, the U.S. needs coordination across government, academia, industry and civil society. This challenge is too big for silos, and it requires that technologists and policymakers work together and understand each other’s worlds.” Their call to action was released in May 2018.

Four Key Pillars

“Our recommendation for a national AI strategy lays out four key responsibilities for government. Within each of these areas we propose actionable steps. We provide some highlights here, and we encourage you to read the full white paper or scan the shorter fact sheet.

* Sustainable and funded government AI research and development can help to advance the capabilities of AI in areas such as healthcare, cybersecurity, national security and education, but there need to be clear ethical guidelines.

* Create new employment opportunities and protect people’s welfare given that AI has the potential to automate certain work activities.

* Liberate and share data responsibly, as the more data that is available, the more “intelligent” an AI system can become. But we need guardrails.

* Remove barriers and create a legal and policy environment that supports AI so that the responsible development and use of AI is not inadvertently derailed.”

AI Race Matters

China, the European Union, and the United States have been in the news about strategic plans and policies on the future of AI. The July 2 AI Matters policy blog post was on the U.S. National Artificial Intelligence Research and Development Strategic Plan, released in June, as an update of the report by the Select Committee on Artificial Intelligence of The National Science & Technology Council. The Computing Community Consortium (CCC) recently released the AI Roadmap Website.
Now, a Center for Data Innovation Report compares the current standings of China, the European Union, and the United States and makes policy recommendations. Here is the report summary: “Many nations are racing to achieve a global innovation advantage in artificial intelligence (AI) because they understand that AI is a foundational technology that can boost competitiveness, increase productivity, protect national security, and help solve societal challenges. This report compares China, the European Union, and the United States in terms of their relative standing in the AI economy by examining six categories of metrics—talent, research, development, adoption, data, and hardware. It finds that despite China’s bold AI initiative, the United States still leads in absolute terms. China comes in second, and the European Union lags further behind. This order could change in coming years as China appears to be making more rapid progress than either the United States or the European Union. Nonetheless, when controlling for the size of the labor force in the three regions, the current U.S. lead becomes even larger, while China drops to third place, behind the European Union. This report also offers a range of policy recommendations to help each nation or region improve its AI capabilities.”

About Face

Face recognition (FR) R&D has made great progress in recent years and has been prominent in the news. In public policy, many are calling for a reversal of the trajectory of FR systems and products. In the hands of people of good will – using products designed for safety and training systems with appropriate data – society and individuals could have a better life. The Verge reports on China’s use of the unique facial markings of pandas to identify individual animals. FR research includes work to mitigate negative outcomes, such as the Adobe and UC Berkeley work on Detecting Facial Manipulations in Adobe Photoshop: automatically detecting when images of faces have been manipulated by splicing, cloning, or object removal.

The intentional and unintentional application of systems that are not designed and trained for ethical use is a threat to society. Screening for terrorists could be good, but FR lie and fraud detection systems may not work properly. The safety of FR is currently an important issue for policymakers, but regulations could have negative consequences for AI researchers. As with many contemporary issues, conflicts arise from conflicting policies in different countries.

Recent and current legislation is attempting to restrict the use of FR and possibly research on it.
* San Francisco, CA; Somerville, MA; and Oakland, CA, are the first three cities to limit the use of FR to identify people.
* “Facial recognition may be banned from public housing thanks to proposed law” – CNET reports that a bill will be introduced to address the issue: as “… landlords across the country continue to install smart home technology and tenants worry about unchecked surveillance, there’s been growing concern about facial recognition arriving at people’s doorsteps.”
* The major social media companies are being pressed on “how they plan to handle the threat of deepfake images and videos on their platforms ahead of the 2020 elections.”
* A call for a more comprehensive ban on FR has been launched by the digital rights group Fight for the Future, seeking a complete Federal ban on government use of facial recognition surveillance.

Beyond legislation against FR research and banning certain products, work is in progress to enable safe and ethical use of FR. A more general example that could be applied to FR is the MITRE work The Ethical Framework for the Use of Consumer-Generated Data in Health Care, which “establishes ethical values, principles, and guidelines to guide the use of Consumer-Generated Data for health care purposes.”

Events and Announcements

AAAI Policy Initiative

AAAI has established a new mailing list on US Policy that will focus exclusively on the discussion of US policy matters related to artificial intelligence. All members and affiliates are invited to join the list at https://aaai.org/Organization/mailing-lists.php

Participants will have the opportunity to subscribe or unsubscribe at any time. The mailing list will be moderated, and all posts will be approved before dissemination. This is a great opportunity for another productive partnership between AAAI and SIGAI policy work.

EPIC Panel on June 5th

A panel on AI, Human Rights, and US Policy will be hosted by the Electronic Privacy Information Center (EPIC) at its annual meeting (and celebration of its 25th anniversary) on June 5, 2019, at the National Press Club in Washington, DC. Our Lorraine Kisselburgh will join Harry Lewis (Harvard), Sherry Turkle (MIT), Lynne Parker (UTenn and White House OSTP director for AI), Sarah Box (OECD), and Bilyana Petkova (EPIC and Maastricht) to discuss AI policy directions for the US. The event is free and open to the public. You can register at https://epic.org/events/June5AIpanel/

2019 ACM SIGAI Election Reminder

Please remember to vote and to review the information on http://www.acm.org/elections/sigs/voting-page. Please note that 16:00 UTC, 14 June 2019 is the deadline for submitting your vote. To access the secure voting site, you will enter your email address (the one associated with your ACM/SIG member record) to reach the menu of active SIG elections for which you are eligible. In the online menu, select your Special Interest Group and enter the 10-digit Unique Pin.

AI Research Roadmap

The Computing Community Consortium (CCC) is requesting comments on the draft of A 20-Year Community Roadmap for AI Research in the US. Please submit your comments here by May 28, 2019. See the AI Roadmap Website for more information. 

Here is a link to the whole report and links to individual sections:

     Title Page, Executive Summary, and Table of Contents 

  1. Introduction
  2. Major Societal Drivers for Future Artificial Intelligence Research 
  3. Overview of Core Technical Areas of AI Research Roadmap: Workshop Reports 
    1. Workshop I: A Research Roadmap for Integrated Intelligence 
    2. Workshop II: A Research Roadmap for Meaningful Interaction 
    3. Workshop III: A Research Roadmap for Self-Aware Learning 
  4. Major Findings 
  5. Recommendations
  6. Conclusions

Appendices (participants and contributors)

AI Hype Not

A recent item in Science|Business, “Artificial intelligence nowhere near the real thing, says German AI chief” by Éanna Kelly, gives policy-worthy warnings and ideas. “In his 20 years as head of Germany’s biggest AI research lab, Wolfgang Wahlster has seen the tech hype machine splutter three times. As he hands over to a new CEO, he warns colleagues: ‘Don’t over-promise.’” The computer scientist, who has just ended a 20-year stint as CEO of the German Research Centre for Artificial Intelligence, says that over-promising greatly underestimates the distance between AI and its human counterpart: “We’re years away from a game changer in the field. I always warn people, one should be a bit careful with what they claim. Every day you work on AI, you see the big gap between human intelligence and AI,” Wahlster told Science|Business.

For AI policy, we should watch out for over-promising, but we also need to be mindful of the time frame for making effective policy and be fully engaged now. Our efforts importantly inform policymakers about the real opportunities to make AI successful. A recent article in The Conversation by Ben Shneiderman, “What alchemy and astrology can teach artificial intelligence researchers,” gives insightful information and advice on how to avoid being distracted away “… from where the real progress is already happening: in systems that enhance – rather than replace – human capabilities.” Shneiderman recommends that technology designers shift “from trying to replace or simulate human behavior in machines to building wildly successful applications that people love to use.”

AAII and EAAI

President Trump issued an Executive Order on February 11, 2019, entitled “Maintaining American Leadership In Artificial Intelligence”. The full text is at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/. The American AI Initiative, of course, needs analysis and implementation details. Two sections of the Executive Order give hope for opportunities to provide public input:

Sec (5)(a)(1)(i): Within 90 days of the date of this order, the OMB Director shall publish a notice in the Federal Register inviting the public to identify additional requests for access or quality improvements for Federal data and models that would improve AI R&D and testing. …[T]hese actions by OMB will help to identify datasets that will facilitate non-Federal AI R&D and testing.
and
Sec (6)(b)
(b)  To help ensure public trust in the development and implementation of AI applications, OMB shall issue a draft version of the memorandum for public comment before it is finalized.
Please stay tuned for ways that our ACM US Technology Policy Committee (USTPC) can help us provide our feedback on the implementation of the Executive Order.

A summary and analysis report is available from the Center for Data Innovation: Executive Order Will Help Ensure U.S. Leadership in AI. They comment that the administration “needs to do more than reprogram existing funds for AI research, skill development, and infrastructure development” and “should ask Congress for significant funding increases to (a) expand these research efforts; (b) implement light-touch regulation for AI; (c) resist calls to implement roadblocks or speed bumps for this technology, including export restrictions; (d) rapidly expand adoption of AI within government; and (e) implement comprehensive reforms to the nation’s workforce training and adjustment policies.”

The latter point was a topic in my invited talk at EAAI-19. Opportunities and innovation in education and training for the workforce of the future rely crucially on public policymaking about workers in the era of increasing use of AI and other automation technologies. An important issue is who will provide training that is timely (by 2030), practical, and affordable for workers who are impacted by job disruptions and transitioning to the new predicted post-automation jobs. The stakeholders, along with workers, are schools, employers, unions, community groups, and others. Even if more jobs are created than lost, the new work in the AI future may be disproportionately unavailable to people in the current and near-future workforce.

Section 1 of the Executive Order “Maintaining American Leadership In Artificial Intelligence” follows:
Section 1.  Policy and Principles.  Artificial Intelligence (AI) promises to drive growth of the United States economy, enhance our economic and national security, and improve our quality of life. The United States is the world leader in AI research and development (R&D) and deployment.  Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.  The Federal Government plays an important role in facilitating AI R&D, promoting the trust of the American people in the development and deployment of AI-related technologies, training a workforce capable of using AI in their occupations, and protecting the American AI technology base from attempted acquisition by strategic competitors and adversarial nations.  Maintaining American leadership in AI requires a concerted effort to promote advancements in technology and innovation, while protecting American technology, economic and national security, civil liberties, privacy, and American values and enhancing international and industry collaboration with foreign partners and allies.  It is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy, the American AI Initiative (Initiative), guided by five principles:
(a)  The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.
(b)  The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today’s industries.
(c)  The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today’s economy and jobs of the future.
(d)  The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.
(e)  The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.

FTC Hearing on AI and Algorithms

FTC Hearing on AI and Algorithms: November 13 and 14 in Washington, DC

From the FTC: The hearing will examine competition and consumer protection issues associated with the use of algorithms, artificial intelligence, and predictive analytics in business decisions and conduct. See the detailed agenda. The record of that proceeding will be open until mid-February. To further its consideration of these issues, the agency seeks public comment on the questions, and it welcomes input on other related topics not specifically listed in the 25 questions.

Please send your thoughts to lrm@gwu.edu on what SIGAI might submit in response to the 25 specific questions posed by the Commission. See below. The hearing will inform the FTC, other policymakers, and the public of
* the current and potential uses of these technologies;
* the ethical and consumer protection issues that are associated with the use of these technologies;
* how the competitive dynamics of firm and industry conduct are affected by the use of these technologies; and
* policy, innovation, and market considerations associated with the use of these technologies.

25 specific questions posed by the FTC

Background on Algorithms, Artificial Intelligence, and Predictive Analytics, and Applications of the Technologies

  1. What features distinguish products or services that use algorithms, artificial intelligence, or predictive analytics? In which industries or business sectors are they most prevalent?
  2. What factors have facilitated the development or advancement of these technologies? What types of resources were involved (e.g., human capital, financial, other)?
  3. Are there factors that have impeded the development of these technologies? Are there factors that could impede further development of these technologies?
  4. What are the advantages and disadvantages for consumers and for businesses of utilizing products or services facilitated by algorithms, artificial intelligence, or predictive analytics?
  5. From a technical perspective, is it sometimes impossible to ascertain the basis for a result produced by these technologies? If so, what concerns does this raise?
  6. What are the advantages and disadvantages of developing technologies for which the basis for the results can or cannot be determined? What criteria should determine when a “black box” system is acceptable, or when a result should be explainable?

Common Principles and Ethics in the Development and Use of Algorithms, Artificial Intelligence, and Predictive Analytics

  1. What are the main ethical issues (e.g., susceptibility to bias) associated with these technologies? How are the relevant affected parties (e.g., technologists, the business community, government, consumer groups, etc.) proposing to address these ethical issues? What challenges might arise in addressing them?
  2. Are there ethical concerns raised by these technologies that are not also raised by traditional computer programming techniques or by human decision-making? Are the concerns raised by these technologies greater or less than those of traditional computer programming or human decision-making? Why or why not?
  3. Is industry self-regulation and government enforcement of existing laws sufficient to address concerns, or are new laws or regulations necessary?
  4. Should ethical guidelines and common principles be tailored to the type of technology involved, or should the goal be to develop one overarching set of best practices?

Consumer Protection Issues Related to Algorithms, Artificial Intelligence, and Predictive Analytics

  1. What are the main consumer protection issues raised by algorithms, artificial intelligence, and predictive analytics?
  2. How well do the FTC’s current enforcement tools, including the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, address issues raised by these technologies?
  3. In recent years, the FTC has held public forums to examine the consumer protection questions raised by artificial intelligence as used in certain contexts (e.g., the 2017 FinTech Forum on artificial intelligence and blockchain and the 2011 Face Facts Forum on facial recognition technology). Since those events, have technological advancements, or the increased prevalence of certain technologies, raised new or increased consumer protection concerns?
  4. What roles should explainability, risk management, and human control play in the implementation of these technologies?
  5. What choices and notice should consumers have regarding the use of these technologies?
  6. What educational role should the FTC play with respect to these technologies? What would be most useful to consumers?

Competition Issues Related to Algorithms, Artificial Intelligence, and Predictive Analytics

  1. Does the use of algorithms, artificial intelligence, and predictive analytics currently raise particular antitrust concerns (including, but not limited to, concerns about algorithmic collusion)?
  2. What antitrust concerns could arise in the future with respect to these technologies?
  3. Is the current antitrust framework for analyzing mergers and conduct sufficient to address any competition issues that are associated with the use of these technologies? If not, why not, and how should the current legal framework be modified?
  4. To what degree do any antitrust concerns raised by these technologies depend on the industry or type of use?

Other Policy Questions

  1. How are these technologies affecting competition, innovation, and consumer choices in the industries and business sectors in which they are used today? How might they do so in the future?
  2. How quickly are these technologies advancing? What are the implications of that pace of technological development from a policy perspective?
  3. How can regulators meet legitimate regulatory goals that may be raised in connection with these technologies without unduly hindering competition or innovation?
  4. Are there tensions between consumer protection and competition policy with respect to these technologies? If so, what are they, and how should they be addressed?
  5. What responsibility does a company utilizing these technologies bear for consumer injury arising from its use of these technologies? Can current laws and regulations address such injuries? Why or why not?

Comments can be submitted online and should be submitted no later than February 15, 2019. If any entity has provided funding for research, analysis, or commentary that is included in a submitted public comment, such funding and its source should be identified on the first page of the comment.

Policy in the News

The Computing Community Consortium (CCC) announced a new initiative to create a Roadmap for Artificial Intelligence. SIGAI’s Yolanda Gil (University of Southern California and President-Elect of AAAI) will work with Bart Selman (Cornell University) to lead the effort. The initiative will support the U.S. Administration’s efforts in this area and involve academic and industrial researchers to help map a course for needed research in AI. They will hold a series of workshops in 2018 and 2019 to produce the Roadmap by spring of 2019. The Computing Research Association (CRA) has been involved in shaping public policy of relevance to computing research for more than two decades; see https://cra.org/govaffairs/blog/. The CRA Government Affairs program has enhanced its efforts to help members of the computing research community contribute to the public debate knowledgeably and effectively.

Ed Felten, Princeton Professor of Computer Science and Public Affairs, has been confirmed by the U.S. Senate to be a member of the U.S. Privacy and Civil Liberties Oversight Board, a bipartisan agency within the executive branch. He will serve as a part-time member of the board while continuing his teaching and research at Princeton. The five-person board is charged with evaluating and advising on executive branch anti-terrorism measures with respect to privacy and civil liberties. “It is a very important issue,” Felten said. “Federal agencies, in the course of doing national security work, have access to a lot of data about people and they do intercept data. It’s important to make sure they are doing those things in the way they should and not overstepping.” Felten added that the board has the authority to review programs that require secrecy. “The public has limited visibility into some of these programs,” Felten said. “The board’s job is to look out for the public interest.”

On October 24, 2018, the National Academies of Sciences, Engineering, and Medicine Forum on Aging, Disability, and Independence will host a workshop in Washington, DC, that will explore the potential of artificial intelligence (AI) to foster a balance of safety and autonomy for older adults and people with disabilities who strive to live as independently as possible. See http://nationalacademies.org/hmd/Activities/Aging/AgingDisabilityForum/2018-OCT-24.aspx

According to Reuters, Amazon scrapped an AI recruiting tool that showed bias against women in automated employment screening.