Policy and AI Ethics

The Alan Turing Institute Public Policy Programme

Among the complexities of public policy making, the new world of AI and data science requires careful attention to ethics and safety when addressing far-reaching challenges in the public domain. Data and AI systems create opportunities that can produce both good and bad outcomes, so organizations responsible for providing public services and creating public policies need intentional processes and designs to keep their systems ethical and safe. A growing body of research is developing comprehensive guidelines and techniques to help industry and government groups consider the full range of AI ethics and safety issues in their work. An excellent example is the Public Policy Programme at The Alan Turing Institute, directed by Dr. David Leslie [1]. Their work complements and supplements the Data Ethics Framework [2], a practical tool for use in any project initiation phase. Data ethics and AI ethics regularly overlap.

The Public Policy Programme describes AI Ethics as “a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies. These values, principles, and techniques are intended both to motivate morally acceptable practices and to prescribe the basic duties and obligations necessary to produce ethical, fair, and safe AI applications. The field of AI ethics has largely emerged as a response to the range of individual and societal harms that the misuse, abuse, poor design, or negative unintended consequences of AI systems may cause.”

They cite the following as some of the most consequential potential harms:

  • Bias and Discrimination
  • Denial of Individual Autonomy, Recourse, and Rights
  • Non-transparent, Unexplainable, or Unjustifiable Outcomes
  • Invasions of Privacy
  • Isolation and Disintegration of Social Connection
  • Unreliable, Unsafe, or Poor-Quality Outcomes

The Ethical Platform for the Responsible Delivery of an AI Project strives to enable the “ethical design and deployment of AI systems using a multidisciplinary team effort. It demands the active cooperation of all team members both in maintaining a deeply ingrained culture of responsibility and in executing a governance architecture that adopts ethically sound practices at every point in the innovation and implementation lifecycle.” The goal is to “unite an in-built culture of responsible innovation with a governance architecture that brings the values and principles of ethical, fair, and safe AI to life.”

[1] Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529

[2] Data Ethics Framework (2018). https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework.

Principled Artificial Intelligence

In January 2020, the Berkman Klein Center released a report by Jessica Fjeld and Adam Nagy, “Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI,” which summarizes the contents of 36 documents on AI principles.

This work acknowledges the surge in ethics-based and human-rights-based frameworks for guiding the development and use of AI technologies. The authors organize these efforts into eight key thematic trends:

  • Privacy
  • Accountability
  • Safety & security
  • Transparency & explainability
  • Fairness & non-discrimination
  • Human control of technology
  • Professional responsibility
  • Promotion of human values

They report “our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus.”

Human-Centered AI

Prof. Ben Shneiderman recently presented his extensive work “Human-Centered AI: Trusted, Reliable & Safe” at the University of Arizona’s NSF Workshop on “Assured Autonomy.” His research emphasizes human autonomy, as opposed to the popular notion of autonomous machines. His open-access paper quickly drew more than 3,200 downloads, and the ideas are now available in the International Journal of Human–Computer Interaction. The abstract is as follows: “Well-designed technologies that offer high levels of human control and high levels of computer automation can increase human performance, leading to wider adoption. The Human-Centered Artificial Intelligence (HCAI) framework clarifies how to (1) design for high levels of human control and high levels of computer automation so as to increase human performance, (2) understand the situations in which full human control or full computer control are necessary, and (3) avoid the dangers of excessive human control or excessive computer control. The methods of HCAI are more likely to produce designs that are Reliable, Safe & Trustworthy (RST). Achieving these goals will dramatically increase human performance, while supporting human self-efficacy, mastery, creativity, and responsibility.”

COVID AI

AI is in the news and in policy discussions regarding COVID-19, both for ways it can help fight the pandemic and for ethical issues that policymakers should address. In the NY Times article “Robots Welcome to Take Over, as Pandemic Accelerates Automation,” Michael Corkery and David Gelles suggest that “social-distancing directives, which are likely to continue in some form after the crisis subsides, could prompt more industries to accelerate their use of automation.” An MIT Technology Review article by Genevieve Bell, “We need mass surveillance to fight covid-19—but it doesn’t have to be creepy,” looks at the pros and cons of the technology and asks whether we now have the chance to “reinvent the way we collect and share personal data while protecting individual privacy.”

Public Health and Privacy Issues

Liza Lin and Timothy W. Martin in “How Coronavirus Is Eroding Privacy” write about how technology is being developed to track and monitor individuals for slowing the pandemic, but that this “raises concerns about government overreach.” Here is an excerpt from that WSJ article: “Governments worldwide are using digital surveillance technologies to track the spread of the coronavirus pandemic, raising concerns about the erosion of privacy. Many Asian governments are tracking people through their cellphones to identify those suspected of being infected with COVID-19, without prior consent. European countries are tracking citizens’ movements via telecommunications data that they claim conceals individuals’ identities; American officials are drawing cellphone location data from mobile advertising firms to monitor crowds, but not individuals. The biggest privacy debate concerns involuntary use of smartphones and other digital data to identify everyone with whom the infected had recent contact, then testing and quarantining at-risk individuals to halt the further spread of the disease. Public health officials say surveillance will be necessary in the months ahead, as quarantines are relaxed and the virus remains a threat while a vaccine is developed.

“In South Korea, investigators scan smartphone data to find within 10 minutes people who might have caught the coronavirus from someone they met. Israel has tapped its Shin Bet intelligence unit, usually focused on terrorism, to track down potential coronavirus patients through telecom data. One U.K. police force uses drones to monitor public areas, shaming residents who go out for a stroll.

“The Covid-19 pandemic is ushering in a new era of digital surveillance and rewiring the world’s sensibilities about data privacy. Governments are imposing new digital surveillance tools to track and monitor individuals. Many citizens have welcomed tracking technology intended to bolster defenses against the novel coronavirus. Yet some privacy advocates are wary, concerned that governments might not be inclined to unwind such practices after the health emergency has passed.

“Authorities in Asia, where the virus first emerged, have led the way. Many governments didn’t seek permission from individuals before tracking their cellphones to identify suspected coronavirus patients. South Korea, China and Taiwan, after initial outbreaks, chalked up early successes in flattening infection curves to their use of tracking programs.

“In Europe and the U.S., where privacy laws and expectations are more stringent, governments and companies are taking different approaches. European nations monitor citizen movement by tapping telecommunications data that they say conceals individuals’ identities.

“American officials are drawing cellphone location data from mobile advertising firms to track the presence of crowds—but not individuals. Apple Inc. and Alphabet Inc.’s Google recently announced plans to launch a voluntary app that health officials can use to reverse-engineer sickened patients’ recent whereabouts—provided they agree to provide such information.”
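
The “crowds, but not individuals” approach the excerpt describes amounts to aggregating location pings into coarse counts and suppressing counts too small to hide an individual. Here is a minimal, hypothetical sketch of that idea; the grid size, threshold, and coordinates are illustrative assumptions, not any agency’s actual method.

```python
# Hypothetical sketch: aggregate location pings into grid-cell counts and
# suppress small cells so no individual stands out. Threshold and grid size
# are illustrative assumptions only.
from collections import Counter

MIN_COUNT = 10  # cells with fewer devices than this are withheld

def crowd_counts(pings: list[tuple[float, float]], cell: float = 0.01) -> dict:
    """Aggregate (lat, lon) pings into coarse grid-cell counts."""
    counts = Counter((round(lat / cell), round(lon / cell)) for lat, lon in pings)
    # Publish only cells large enough that no single device is identifiable.
    return {c: n for c, n in counts.items() if n >= MIN_COUNT}

pings = [(38.9072 + i * 1e-4, -77.0369) for i in range(25)] + [(40.7128, -74.0060)]
print(crowd_counts(pings))  # the lone outlying ping is suppressed
```

The suppression threshold is what separates monitoring crowds from tracking individuals: a single phone never appears in the published output.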

Germany Changes Course on Contact Tracing App

Politico reports (4/26) that “the German government announced today that Berlin would adopt a ‘decentralized’ approach to a coronavirus contact-tracing app — now backing an approach championed by U.S. tech giants Apple and Google. ‘We will promote the use of a consistently decentralized software architecture for use in Germany,’ the country’s Federal Health Minister Jens Spahn said on Twitter, echoing an interview in the Welt am Sonntag newspaper. Earlier this month, Google and Apple announced they would team up to unlock their smartphones’ Bluetooth capabilities to allow developers to build interoperable contact tracing apps. Germany is now abandoning a centralized approach spearheaded by the German-led Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project. Berlin’s U-turn comes after a group of six organizations on Friday urged Angela Merkel’s government to reassess plans for a smartphone app that traces potential coronavirus infections, warning that it does not do enough to protect user data.”
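
To make the centralized-versus-decentralized distinction concrete, here is a deliberately simplified, hypothetical sketch of the decentralized idea: phones broadcast rotating pseudonymous IDs derived from private daily keys, a diagnosed user publishes only their keys, and all matching happens on the device. The real Apple/Google Exposure Notification protocol uses proper cryptographic key derivation (HKDF/AES); the plain hashing below is a stand-in assumption.

```python
# Simplified sketch of decentralized proximity tracing. Not the actual
# Exposure Notification protocol; SHA-256 here stands in for its key schedule.
import hashlib
import secrets

def rolling_ids(daily_key: bytes, intervals: int = 144) -> list[bytes]:
    """Derive short-lived broadcast IDs from a device's private daily key."""
    return [hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

# Each phone keeps its daily keys private and remembers IDs it heard nearby.
alice_key = secrets.token_bytes(16)
bob_heard = set(rolling_ids(alice_key)[40:45])  # Bob's phone overheard Alice

# If Alice tests positive, she uploads only her daily key -- nothing about
# whom she met or where she was.
published_keys = [alice_key]

# Bob's phone re-derives IDs from published keys locally and checks overlap.
exposed = any(rid in bob_heard for key in published_keys for rid in rolling_ids(key))
print("exposure detected on-device:", exposed)
```

The design choice that worried privacy advocates about the centralized PEPP-PT approach is precisely what this sketch avoids: no server ever learns the contact graph, because matching never leaves the phone.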

NSF Program on Fairness in Artificial Intelligence (FAI) in Collaboration with Amazon

A new National Science Foundation solicitation, NSF 20-566, has been announced by the Directorate for Computer and Information Science and Engineering (Division of Information and Intelligent Systems) and the Directorate for Social, Behavioral and Economic Sciences (Division of Behavioral and Cognitive Sciences).

Bias and Fairness

Today’s post has AI and Policy news updates and our next installment on Bias and Policy: the fairness component.

News Items for February 2020

  • OECD launched the OECD.AI Observatory, an online platform to shape and share AI policies across the globe. 
  • The White House released the American Artificial Intelligence Initiative: Year One Annual Report and supported the OECD policy.

Bias and Fairness

In terms of decision-making and policy, fairness can be defined as “the absence of any prejudice or favoritism towards an individual or a group based on their inherent or acquired characteristics.” Among the most widely used definitions of fairness are equalized odds, equal opportunity, demographic parity, fairness through unawareness (group unaware), and treatment equality.

Equalized odds and equal opportunity both require that individuals who qualify for a desirable outcome have an equal chance of being correctly assigned it, regardless of whether they belong to a protected or unprotected group (e.g., female/male). The additional concepts of “demographic parity” and “group unaware” are illustrated by the Google visualization research team using an example “simulating loan decisions for different groups.” Equal opportunity focuses on a group’s true positive rate.

Demographic parity, on the other hand, focuses on the positive rate alone. Consider a loan approval process for two groups, A and B. Demographic parity requires that the overall loan approval rate be the same for both groups, regardless of whether a group is protected. Because only the overall approval rate matters, some people in group A who would pay back the loan might be disadvantaged relative to people in group B who would not. Under equal opportunity they would not be disadvantaged, since that concept focuses on the true positive rate. Finally, fairness through unawareness holds that “an algorithm is fair as long as any protected attributes A are not explicitly used in the decision-making process.”
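
To make the distinction concrete, here is a minimal sketch that computes both quantities on synthetic loan data. The group labels, base rates, and decision rule are illustrative assumptions of ours, not part of the cited survey.

```python
# A minimal sketch (synthetic data, illustrative numbers only) contrasting
# demographic parity with equal opportunity for a loan-approval example.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B
repay_prob = np.where(group == 0, 0.7, 0.5)   # base rates differ by group
would_repay = rng.random(n) < repay_prob      # true outcome
approved = would_repay                        # a stylized, perfectly accurate decision

for g, name in [(0, "group A"), (1, "group B")]:
    mask = group == g
    approval_rate = approved[mask].mean()          # what demographic parity compares
    tpr = approved[mask & would_repay].mean()      # what equal opportunity compares
    print(f"{name}: approval rate = {approval_rate:.2f}, TPR = {tpr:.2f}")

# Approval rates differ (~0.70 vs ~0.50), so demographic parity fails, yet the
# true positive rate is 1.0 for both groups, so equal opportunity is satisfied.
```

The stylized perfect decision rule is chosen to make the point sharply: a classifier can satisfy one criterion while violating the other. With noisier scores the two metrics diverge in less extreme ways.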

Each of these fairness concepts falls under individual fairness, subgroup fairness, or group fairness. For example, demographic parity, equalized odds, and equal opportunity are group fairness measures, while fairness through awareness is an individual fairness measure, focusing on individuals rather than on the group as a whole.

Definitions of bias fall into three categories: data, algorithmic, and user interaction (feedback loop):

  • Data — behavioral bias, presentation bias, linking bias, and content production bias;
  • Algorithmic — historical bias, aggregation bias, temporal bias, and social bias;
  • User Interaction — popularity bias, ranking bias, evaluation bias, and emergent bias.

Bias is a large domain with much to explore and take into consideration. Bias and public policy will be further discussed in future blog posts.

This series of posts on Bias has been co-authored by Farhana Faruqe, doctoral student in the GWU Human-Technology Collaboration group.

References 
 [1] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. CoRR, abs/1908.09635, 2019.
[2] Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.). Curran Associates, Inc., 3315–3323. http://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning.pdf
[3] Martin Wattenberg, Fernanda Viegas, and Moritz Hardt. Attacking discrimination with smarter machine learning. Accessed at https://research.google.com/bigpicture/attacking-discrimination-in-ml/, 2016

Discrimination and Bias

Our current public policy posts, focused on ethics and bias in current and emerging areas of AI, build on the work “A Survey on Bias and Fairness in Machine Learning” by Ninareh Mehrabi, et al. and resources provided by Barocas, et al. The guest co-author of this series of blog posts on AI and bias is Farhana Faruqe, doctoral student in the George Washington University Human-Technology Collaboration program. We look forward to your comments and suggestions.

Discrimination, unfairness, and bias are terms used frequently these days in the context of AI and data science applications that make decisions in the everyday lives of individuals and groups. Machine learning applications depend on datasets that are usually a reflection of our real world in which individuals have intentional and unintentional biases that may cause discrimination and unfair actions. Broadly, fairness is the absence of any prejudice or favoritism towards an individual or a group based on their intrinsic or acquired traits in the context of decision-making.

Today’s blog post focuses on discrimination, which Ninareh Mehrabi, et al. describe as follows:

Direct Discrimination: “Direct discrimination happens when protected attributes of individuals explicitly result in non-favorable outcomes toward them.” Traits such as race, color, national origin, religion, sex, family status, disability, marital status, receipt of public assistance, and age are identified as sensitive or protected attributes in the machine learning world. Discrimination based on these attributes, which are listed in the Fair Housing Act (FHA) and the Equal Credit Opportunity Act (ECOA), is illegal.

Indirect Discrimination: Even when sensitive or protected attributes are not used against an individual, indirect discrimination can still occur. For example, residential zip code is not categorized as a protected attribute, but race, which is protected, can often be inferred from it. Thus “protected groups or individuals still can get treated unjustly as a result of implicit effects from their protected attributes.”
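
The following hypothetical sketch shows how a proxy attribute can leak a protected one: dropping the protected attribute (the “fairness through unawareness” approach) does not help when a retained feature such as zip code is strongly correlated with it. The correlation strength and the data are made up purely for illustration.

```python
# Hypothetical sketch of indirect discrimination via a proxy attribute.
# The zip-code/protected-attribute correlation is a synthetic assumption.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n = 5_000
protected = rng.integers(0, 2, size=n)  # protected attribute, e.g. race
# Residential segregation: zip code correlates strongly with the protected group.
zip_code = np.where(rng.random(n) < 0.9, protected, 1 - protected)

# A "fair through unawareness" model drops `protected` but keeps `zip_code`.
# Predicting the majority group per zip code recovers it most of the time:
majority = {z: Counter(protected[zip_code == z]).most_common(1)[0][0] for z in (0, 1)}
inferred = np.array([majority[z] for z in zip_code])
print(f"protected attribute recovered from zip code alone: {(inferred == protected).mean():.0%}")
```

In this synthetic setting the protected attribute is recovered about 90% of the time, which is exactly the “implicit effect” the definition above warns about.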

Systemic Discrimination: Systemic discrimination is defined as “policies, customs, or behaviors that are a part of the culture or structure of an organization that may perpetuate discrimination against certain subgroups of the population.” For example, if the custom in the nursing profession is to expect nurses to be women, then excluding qualified male nurses from nursing positions is systemic discrimination.

Statistical Discrimination: The authors define statistical discrimination as “a phenomenon where decision-makers use average group statistics to judge an individual belonging to that group.” Racial profiling in law enforcement, in which minority drivers are pulled over more often than white drivers, is an example.

Explainable Discrimination: In some cases, “discrimination can be explained using attributes” such as working hours and education, and is legal and acceptable. In a widely used dataset in the fairness domain, males on average have a higher annual income than females because, on average, females work fewer hours per week than males do. Decisions made without considering working hours could lead to discrimination (see the sketch after these definitions).

Unexplainable Discrimination: Unlike explainable discrimination, this type of discrimination is not legal, because “the discrimination toward a group is unjustified.” Researchers have introduced techniques for data preprocessing and training that remove unexplainable discrimination.
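
To illustrate the explainable/unexplainable distinction, here is a small synthetic sketch in the spirit of conditional comparisons: a raw gap in outcomes between groups shrinks once a legitimate attribute (hours worked) is controlled for, suggesting the raw difference is largely explainable. The data-generating assumptions are ours, purely for illustration.

```python
# Synthetic sketch of "explainable" vs raw group differences: income depends
# on hours only, so conditioning on hours bands shrinks the raw gender gap.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
female = rng.integers(0, 2, size=n).astype(bool)
# Illustrative assumption: women in this synthetic data work fewer hours on average.
hours = np.where(female, rng.normal(35, 5, n), rng.normal(42, 5, n))
high_income = rng.random(n) < (hours - 20) / 40  # outcome driven by hours alone

raw_gap = high_income[~female].mean() - high_income[female].mean()
print(f"raw gap in high-income rate: {raw_gap:.3f}")

# Within each hours band the gap (mostly) disappears, so the raw difference
# is largely explainable by hours worked rather than by gender itself.
for lo, hi in [(20, 35), (35, 45), (45, 60)]:
    band = (hours >= lo) & (hours < hi)
    gap = high_income[band & ~female].mean() - high_income[band & female].mean()
    print(f"hours in [{lo},{hi}): gap = {gap:+.3f}")
```

A residual within-band gap, had we built one into the data, would be the unexplainable part that preprocessing and training techniques aim to remove.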

To understand bias in techniques such as machine learning, our next blog post will discuss another important aspect: fairness.

Bias, Ethics, and Policy

We are planning a series of posts on Bias, starting with the background and context of bias in general and then focusing on specific instances of bias in current and emerging areas of AI. Ultimately, this information is intended to inform ideas on public policy. We look forward to your comments and suggestions for a robust discussion.

The extensive survey “A Survey on Bias and Fairness in Machine Learning” by Ninareh Mehrabi et al. will be useful for the conversation. The guest co-author of the ACM SIGAI Public Policy blog posts on Bias will be Farhana Faruqe, doctoral student in the George Washington University Human-Technology Collaboration program.

A related announcement is about the new section on AI and Ethics in the Springer Nature Computer Science journal. “The AI & Ethics section focuses on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. It seeks to promote informed debate and discussion of the current and future developments in AI, and the ethical, moral, regulatory, and policy implications that arise from these developments.” As a Co-Editor of the new section, I welcome you to submit a manuscript and contact me with any questions and suggestions.

AI Revolution or Evolution

An interesting IEEE Spectrum article, “AI and Economic Productivity: Expect Evolution, Not Revolution” by Jeffrey Funk, questions popular claims about the rapid pace of AI’s impact on productivity and the economy. He asserts that “despite the hype, artificial intelligence will take years to significantly boost economic productivity.” If correct, this has serious implications for policymakers who have chosen to be proactive. The article raises good points, but many of its examples do not look like real AI, at least not as a dominant component. Putting “smart” in the name of a product doesn’t make it AI, and automation doesn’t necessarily use AI.

On a broader note, we should care about the technology language we use and be aware of the usual practices in commercialization. As discussed in previous blog posts, stretching the meanings of terms like AI, machine learning, and algorithms too far makes rational discourse more difficult. Some of us remember the marketing of expert systems and relational databases: companies do society a disservice by claiming that each breakthrough technology is actually in their products. Here we go again: today almost anything counts as AI, depending on the point you want to make and the products you want to sell.

Another issue raised by the article stems from its emphasis on startups as the leaders of economic impact, as opposed to innovations from established industry and government labs. Technologies have adoption curves, running from early adopters through laggards, of about seven years. Add the difficulty of making a startup succeed, and a decade or so is probably the minimum timescale for a large impact on the economy. A better perspective on revolution versus evolution could come from longitudinal evaluations of trends. In that case, a good endpoint for a hypothesis about dramatic impact on productivity might be the 2030–2035 timeframe.

A problem with using a vague or broad notion of AI is that policymakers could miss the revolutionary impact of data science, which may or may not involve real AI. Data science probably has the best chance of dramatically affecting society and the economy in both the short and long terms, and it has the advantage of not requiring the design and manufacture of physical objects, so it need not wait for consumers to adopt new products. Data science is already affecting society and employment, with obvious, and not so obvious, revolutionary impacts on our lives.

PCAST and AI Plan

Executive Order on The President’s Council of Advisors on Science and Technology (PCAST)

President Trump issued an executive order on October 22 re-establishing the President’s Council of Advisors on Science and Technology (PCAST), an advisory body that consists of science and technology leaders from the private and academic sectors. PCAST is to be chaired by Kelvin Droegemeier, director of the Office of Science and Technology Policy, and Edward McGinnis, formerly with DOE, is to serve as the executive director. The majority of the 16 members are from key industry sectors. The executive order says that the council is expected to address “strengthening American leadership in science and technology, building the Workforce of the Future, and supporting foundational research and development across the country.” For more information, see the Inside Education article about the first appointments.

Schumer AI Plan

Jeffrey Mervis has a November 11, 2019, article in AAAS’s News from Science on a recommendation that the government create a new agency funded with $100 billion over 5 years for basic AI research. “Senator Charles Schumer (D–NY) says the initiative would enable the United States to keep pace with China and Russia in a critical research arena and plug gaps in what U.S. companies are unwilling to finance.”

Schumer presented his ideas publicly in an early-November speech to senior national security and research policymakers, following a recent presidential executive order. He wants to create a new national science-tech fund for “fundamental research related to AI and some other cutting-edge areas” such as quantum computing, 5G networks, robotics, cybersecurity, and biotechnology. The funds would encourage research at U.S. universities, companies, and other federal agencies, and would support incubators for moving research into commercial products. An additional article can be found in Defense News.

Work Transition

AI and other automation technologies have great promise for benefitting society and enhancing productivity, but appropriate policies by companies and governments are needed to help manage workforce transitions and make them as smooth as possible. The McKinsey Global Institute report AI, automation, and the future of work: Ten things to solve for states that “There is work for everyone today and there will be work for everyone tomorrow, even in a future with automation. Yet that work will be different, requiring new skills, and a far greater adaptability of the workforce than we have seen. Training and retraining both mid-career workers and new generations for the coming challenges will be an imperative. Government, private-sector leaders, and innovators all need to work together to better coordinate public and private initiatives, including creating the right incentives to invest more in human capital. The future with automation and AI will be challenging, but a much richer one if we harness the technologies with aplomb—and mitigate the negative effects.” They list likely actionable and scalable solutions in several key areas:

1. Ensuring robust economic and productivity growth

2. Fostering business dynamism

3. Evolving education systems and learning for a changed workplace

4. Investing in human capital

5. Improving labor-market dynamism

6. Redesigning work

7. Rethinking incomes

8. Rethinking transition support and safety nets for workers affected

9. Investing in drivers of demand for work

10. Embracing AI and automation safely

In redesigning work and rethinking incomes, we have the chance for bold ideas that improve the lives of workers and give them more interesting jobs that could provide meaning, purpose, and dignity.

Some of the categories of new jobs that could replace old jobs are
1. Making, designing, and coding in AI, data science, and engineering occupations
2. Working in new types of non-AI jobs that are enhanced by AI, making unpleasant old jobs more palatable or providing new jobs that are more interesting; the gig economy and crowdsourcing ideas are examples that could provide creative employment options
3. Providing living wages for people to do things they love, for example in the arts, through dramatic funding increases for NEA and NEH programs. Grants to individual artists and musicians, professional and amateur musical organizations, and informal arts education initiatives could enrich communities while providing income for millions of people. Policies implementing this idea could be one piece of the future-of-work puzzle and would be far preferable for the economy and society to allowing large-scale unemployment caused by the loss of work from automation.

National AI Strategy

The National Artificial Intelligence Research and Development Strategic Plan – an update of the report by the Select Committee on Artificial Intelligence of the National Science & Technology Council – was released in June 2019, and the President’s Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, was released on February 11. The Computing Community Consortium (CCC) recently released the AI Roadmap Website, and an interesting industry response is “Intel Gets Specific on a National Strategy for AI: How to Propel the US into a Sustainable Leadership Position on the Global Artificial Intelligence Stage” by Naveen Rao and David Hoffman. Excerpts follow, and the accompanying links provide the details:

“AI is more than a matter of making good technology; it is also a matter of making good policy. And that’s what a robust national AI strategy will do: continue to unlock the potential of AI, prepare for AI’s many ramifications, and keep the U.S. among leading AI countries. At least 20 other countries have published, and often funded, their national AI strategies. Last month, the administration signaled its commitment to U.S. leadership in AI by issuing an executive order to launch the American AI Initiative, focusing federal government resources to develop AI. Now it’s time to take the next step and bring industry and government together to develop a fully realized U.S. national strategy to continue leading AI innovation.

“… But to sustain leadership and effectively manage the broad social implications of AI, the U.S. needs coordination across government, academia, industry and civil society. This challenge is too big for silos, and it requires that technologists and policymakers work together and understand each other’s worlds.” Their call to action was released in May 2018.

Four Key Pillars

“Our recommendation for a national AI strategy lays out four key responsibilities for government. Within each of these areas we propose actionable steps. We provide some highlights here, and we encourage you to read the full white paper or scan the shorter fact sheet.

  • Sustainable and funded government AI research and development can help to advance the capabilities of AI in areas such as healthcare, cybersecurity, national security and education, but there need to be clear ethical guidelines.
  • Create new employment opportunities and protect people’s welfare given that AI has the potential to automate certain work activities.
  • Liberate and share data responsibly, as the more data that is available, the more “intelligent” an AI system can become. But we need guardrails.
  • Remove barriers and create a legal and policy environment that supports AI so that the responsible development and use of AI is not inadvertently derailed.”