AI Policy Nuggets II

What Can Biden Do for Science?

A Science|Business webcast convened a forum of public and private sector leaders to discuss the need for the president-elect to bring world leaders together to re-establish ‘rules of engagement’ on science.

Brookings Webinar on the Future of AI

“On November 17, 2020, the Brookings Institution Center for Technology Innovation hosted a webinar to discuss the future of AI, how it is being deployed, and the policy and legal issues being raised. Speakers explored ways to mitigate possible concerns and how to move forward safely, securely, and in a manner consistent with human values.”

Section 230 Update

Politico reports that “Trump for months has urged Congress to revoke industry legal shield Section 230, while its staunchest critics largely pushed to revamp it instead. But the president’s more drastic call for a total repeal — echoed by Biden for very different reasons — is gaining traction among Republicans in Washington. The NYT reported Thursday that White House chief of staff Mark Meadows has even offered Trump’s support for a must-pass annual defense spending bill if it includes such a repeal.”

The European AI Policy Conference

AI may be the most important digital technology transforming industries around the world.
“Businesses in Europe are at the forefront of some of the latest advancements in the field, and European universities are home to the greatest concentration of AI researchers in the world. Every week, new case studies emerge showing the potential opportunities that can arise from greater use of the technology.” The European AI Policy Conference brings together leading voices in AI to discuss why European success in AI is important, how the EU compares to other world leaders today, and what steps European policymakers should take to be more competitive in AI. “The European AI Policy Conference is a high-level forum to connect stakeholders working to promote AI in Europe, showcase advances in AI, and promote AI policies supporting its development to EU policymakers and thought leaders.”

Policy Issues from AI and Ethics

The inaugural issue of the new journal AI and Ethics contains several articles relevant to AI and Public Policy.

Jelinek, T., Wallach, W. & Kerimi, D. “Policy brief: the creation of a G20 coordinating committee for the governance of artificial intelligence” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00019-y

This policy brief proposes a Group of Twenty (G20) coordinating committee for the governance of artificial intelligence (CCGAI) to plan and coordinate the mitigation of AI risks at the multilateral level. The G20 is the appropriate regime complex for such a metagovernance mechanism, given the involvement of the largest economies and their highest political representatives.

Gambelin, O. “Brave: what it means to be an AI Ethicist” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00020-5

This piece offers a preliminary definition of what it means to be an AI Ethicist, first examining the concept of an ethicist in the context of artificial intelligence, followed by exploring what responsibilities are added to the role in industry specifically, and ending on the fundamental characteristic that underlies it all: bravery.

Smith, P., Smith, L. “Artificial intelligence and disability: too much promise, yet too little substance?” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00004-5

Much has been written about the potential of artificial intelligence (AI) to support, and even transform, the lives of disabled people. Many individuals are benefiting, but what are the true limits of such tools? What are the ethics of allowing AI tools to suggest different courses of action, or aid in decision-making? And does AI offer too much promise for individuals? We draw conclusions as to how AI software and technology might best be developed in the future.

Coeckelbergh, M. “AI for climate: freedom, justice, and other ethical and political challenges” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00007-2

Artificial intelligence can and should help to build a greener, more sustainable world and to deal with climate change, but these opportunities also raise ethical and political issues that need to be addressed. This article discusses these issues, with a focus on problems concerning freedom and justice at a global level, and calls for responsible use of AI for climate in the light of these challenges.

Hickok, M. “Lessons learned from AI ethics principles for future actions” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00008-1

The use of AI systems has become significantly more prevalent in recent years, and concerns about how these systems collect, use, and process big data have also increased. To address these concerns and advocate for ethical and responsible AI development and implementation, NGOs, research centers, private companies, and governmental agencies have published more than 100 AI ethics principles and guidelines. Lessons must be learned from the shortcomings of AI ethics principles to ensure that future investments, collaborations, standards, codes, and legislation reflect the diversity of voices and incorporate the experiences of those who are already impacted by AI.

Fall Nuggets

USTPC Panel on Section 230

On November 18 from 5:00 to 6:30 PM EST, experts from ACM’s US Technology Policy Committee (USTPC) will discuss the legal liability of Internet platforms such as Facebook and Twitter under Section 230 of the Communications Decency Act. The USTPC panelists are Andy Grosso (Moderator), Mark Rasch, Pam Samuelson, Richard M. Sherman, and Danny Weitzner.

Biden and Science

Participants in a Science|Business webcast urged that a global assembly “should press leaders of the big industrial nations to open – or re-open – their research systems, while also ensuring that COVID-19 vaccines are freely available to everyone in the world.” Robert-Jan Smits, former director-general of the European Commission’s research and innovation directorate, said an international summit “would really show that senior leaders are turning the page.”

Center for Data Innovation On the EU Data Governance Act

“The European Commission is planning to release its Data Governance Act to facilitate data sharing within the EU. The goal is to increase data sharing among businesses, make more public-sector data available for reuse, and foster data sharing of personal data, including for ‘altruistic’ purposes. While the goals of the act are commendable, many of the specific policies outlined in a draft would create a new data localization requirement, undermine the EU’s commitments to digital free trade, and contradict its open data principles.”

USTPC in the News

Overview

The ACM’s US Technology Policy Committee (USTPC) has been very active in July already! The contributions and visibility of USTPC, as a group and through its individual members, are very welcome and impressive. The following list has links to highly recommended reading.

Amicus Brief: USTPC Urges Narrower Definition of Computer Fraud and Abuse Act

ACM’s USTPC filed an amicus curiae (“friend of the court”) brief with the United States Supreme Court in the landmark case of Van Buren v. United States. “Van Buren marks the first time that the US Supreme Court has reviewed the Computer Fraud and Abuse Act (CFAA), a 1986 law that was originally intended to punish hacking. In recent years, however, the CFAA has been used to criminally prosecute both those who access a computer system without permission, as well as those who have permission but exceed their authority to use a database once logged in.”

USTPC Statement on Face Recognition

ACM’s US Technology Policy Committee (USTPC) has assessed the present state of facial recognition (FR) technology as applied by government and the private sector. The Committee concludes that, “when rigorously evaluated, the technology too often produces results demonstrating clear bias based on ethnic, racial, gender, and other human characteristics recognizable by computer systems. The consequences of such bias, USTPC notes, frequently can and do extend well beyond inconvenience to profound injury, particularly to the lives, livelihoods and fundamental rights of individuals in specific demographic groups, including some of the most vulnerable populations in our society.”
See the NBC news article.

Barbara Simons recipient of the 2019 ACM Policy Award

USTPC’s Barbara Simons, founder of USTPC predecessor USACM, is the recipient of the 2019 ACM Policy Award for “long-standing, high-impact leadership as ACM President and founding Chair of ACM’s US Public Policy Committee (USACM), while making influential contributions to improve the reliability of and public confidence in election technology. Over several decades, Simons has advanced technology policy by founding and leading organizations, authoring influential publications, and effecting change through lobbying and public education.”
Congratulations, Barbara!

Potential New Issues

ACM Urges Preservation of Temporary Visa Exemptions for Nonimmigrant Students. Harvard’s filing is a complaint for declaratory and injunctive relief.

This issue may have dramatic impacts on university research and teaching this fall.

Thank you USTPC for your hard work and representation of ACM to policymakers!

AI and Facial Recognition

AI in Congress

Politico reports on two separate bills introduced Thursday, June 2. (See the section entitled “Artificial Intelligence: Let’s Do the Thing”.)

The National AI Research Resource Task Force Act. “The bipartisan, bicameral bill introduced by Reps. Anna Eshoo (D-Calif.), Anthony Gonzalez (R-Ohio), and Mikie Sherrill (D-N.J.), along with companion legislation by Sens. Rob Portman (R-Ohio) and Martin Heinrich (D-N.M.), would form a committee to figure out how to launch and best use a national AI research cloud. Public and private researchers and developers from across the country would share this cloud to combine their data, computing power and other resources on AI. The panel would include experts from government, academia and the private sector.”

The Advancing Artificial Intelligence Research Act. “The bipartisan bill introduced by Senate Commerce Chairman Roger Wicker (R-Miss.), Sen. Cory Gardner (R-Colo.) and Gary Peters (D-Mich.), a founding member of the Senate AI Caucus, would create a program to accelerate research and development of guidance around AI at the National Institute of Standards and Technology. It would also create at least a half-dozen AI research institutes to examine the benefits and challenges of the emerging technology and how it can be deployed; provide funding to universities and nonprofits researching AI; and launch a pilot at the National Science Foundation for AI research grants.”

Concerns About Facial Recognition (FR): Discrimination, Privacy, and Democratic Freedom

Beyond ethical and moral issues, a broader set of concerns about face recognition technology and AI occupies citizens and policymakers. Areas of concern include accuracy; surveillance; data storage, permissions, and access; discrimination, fairness, and bias; privacy and video recording without consent; democratic freedoms, including the rights to choose, gather, and speak; and abuse of the technology, such as non-intended uses, hacking, and deep fakes. Used responsibly and ethically, face recognition can be valuable for finding missing people, responsible policing and law enforcement, medical uses, healthcare, virus tracking, legal system and court uses, and advertising. Guidelines by organizations such as the AMA and legislation like S.3284 – the Ethical Use of Facial Recognition Act – are being developed to encourage the proper use of AI and face recognition.

Some of the above issues specifically require ethical analysis, as in the following list by Yaroslav Kuflinski:

Accuracy — FR systems naturally discriminate against non-whites, women, and children, presenting errors of up to 35% for non-white women.

Surveillance issues — concerns about “big brother” watching society.

Data storage — images of ordinary people retained for future purposes and stored alongside those of genuine criminals.

Finding missing people — breaches of the right to a private life.

Advertising — invasion of privacy by displaying information and preferences that a buyer would prefer to keep secret.

Studies of commercial systems are increasingly available, for example an analysis of Amazon Rekognition.

Sources of unfairness and discrimination in machine learning have been identified in two areas: the data and the algorithms. Biases in data skew what is learned by machine learning methods, and flaws in algorithms can lead to unfair decisions even when the data is unbiased. Intentional or unintentional biases can exist in the data used to train FR systems.
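To make the data-bias point concrete, here is a minimal Python sketch on synthetic, hypothetical data (not drawn from any cited study). It compares false positive and false negative rates across two demographic groups; a model trained on data that under-represents one group typically shows exactly this kind of gap:

```python
# Minimal sketch (illustrative only): per-group error rates, one common
# way that bias learned from skewed data becomes visible. All data here
# is synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical evaluation set: true labels and group membership,
# with group B under-represented.
y_true = rng.integers(0, 2, size=n)
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])

# Simulate a model that errs more often on the under-represented group.
error_rate = np.where(group == "A", 0.05, 0.25)
flip = rng.random(n) < error_rate
y_pred = np.where(flip, 1 - y_true, y_true)

# Per-group false positive and false negative rates (an equalized-odds view).
for g in ("A", "B"):
    in_g = group == g
    fpr = np.mean(y_pred[in_g & (y_true == 0)] == 1)
    fnr = np.mean(y_pred[in_g & (y_true == 1)] == 0)
    print(f"group {g}: FPR = {fpr:.2f}, FNR = {fnr:.2f}")
```

Running this prints a markedly higher error rate for group B, even though the evaluation procedure itself is neutral; the disparity was baked in upstream.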

New human-centered design approaches seek to provide intentional system development steps and processes in collecting data and creating high quality databases, including the elimination of naturally occurring bias reflected in data about real people.

Bias That Pertains Especially to Facial Recognition (Mehrabi et al. and Barocas et al.)

Direct Discrimination: “Direct discrimination happens when protected attributes of individuals explicitly result in non-favorable outcomes toward them”. Some traits like race, color, national origin, religion, sex, family status, disability, exercised rights under CCPA, marital status, receipt of public assistance, and age are identified as sensitive attributes or protected attributes in the machine learning world.

Indirect Discrimination: Even if sensitive or protected attributes are not used against an individual, indirect discrimination can still happen. For example, residential zip code is not categorized as a protected attribute, but from the zip code one might infer race, which is a protected attribute. So, “protected groups or individuals still can get treated unjustly as a result of implicit effects from their protected attributes”.

Systemic Discrimination: “policies, customs, or behaviors that are a part of the culture or structure of an organization that may perpetuate discrimination against certain subgroups of the population”.

Statistical Discrimination: In law enforcement, racial profiling is an example of statistical discrimination. In this case, minority drivers are pulled over more often than white drivers — “statistical discrimination is a phenomenon where decision-makers use average group statistics to judge an individual belonging to that group.”

Explainable Discrimination: In some cases, discrimination can be explained using attributes like working hours and education, which is legal and acceptable. In “the UCI Adult dataset [6], a widely-used dataset in the fairness domain, males on average have a higher annual income than females; however, this is because on average females work fewer hours than males per week. Work hours per week is an attribute that can be used to explain low income. If we make decisions without considering working hours such that males and females end up averaging the same income, we could lead to reverse discrimination since we would cause male employees to get lower salary than females.” A small numerical sketch of this conditioning appears after this list.

Unexplainable Discrimination: Unlike explainable discrimination, this type of discrimination is not legal, because “the discrimination toward a group is unjustified”.
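As promised above, here is a small Python sketch of explainable discrimination on synthetic data (the numbers are invented and are not taken from the UCI Adult dataset). The raw income gap between groups is large, but it nearly vanishes once we compare people within the same work-hours band, i.e., the gap is “explainable” by hours worked:

```python
# Minimal sketch (synthetic data): separating an "explainable" outcome gap
# from residual discrimination by conditioning on an explanatory attribute,
# in the spirit of the work-hours example discussed above.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical population: one group averages fewer weekly work hours.
female = rng.random(n) < 0.5
hours = np.where(female, rng.normal(34, 5, n), rng.normal(42, 5, n))
# Income depends on hours (plus noise), not directly on gender.
income = 1_000 * hours + rng.normal(0, 2_000, n)

raw_gap = income[~female].mean() - income[female].mean()
print(f"raw male-female income gap: {raw_gap:,.0f}")

# Conditional gap: compare only within narrow work-hours bands.
bands = np.digitize(hours, np.arange(20, 61, 5))
gaps = []
for b in np.unique(bands):
    men = (bands == b) & ~female
    women = (bands == b) & female
    if men.sum() > 30 and women.sum() > 30:
        gaps.append(income[men].mean() - income[women].mean())
print(f"average gap within equal-hours bands: {np.mean(gaps):,.0f}")
```

In this constructed example the raw gap is around 8,000 while the within-band gap is near zero: the disparity is fully explained by hours worked. A large residual within-band gap, by contrast, would point to unexplainable discrimination.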

How to Discuss Facial Recognition

Recent controversies about FR mix technology issues with ethical imperatives and ignore that people can disagree on which are the “correct” ethical principles. A recent ACM tweet on FR and face masks was interpreted in different ways, and ACM issued an official clarification. A question that emerges is whether AI and other technologies should be, and can be, banned rather than controlled and regulated.

In early June 2020, IBM CEO Arvind Krishna said in a letter to Congress that IBM is exiting the facial recognition business and asked for reforms to combat racism: “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna wrote. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

The guest co-author of this series of blog posts on AI and bias is Farhana Faruqe, doctoral student in the George Washington University Human-Technology Collaboration program.

COVID AI

AI is in the news and in policy discussions regarding COVID-19, both about ways to help fight the pandemic and in terms of ethical issues that policymakers should address. Michael Corkery and David Gelles, in the NY Times article “Robots Welcome to Take Over, as Pandemic Accelerates Automation”, suggest that “social-distancing directives, which are likely to continue in some form after the crisis subsides, could prompt more industries to accelerate their use of automation.” An MIT Technology Review article by Genevieve Bell, “We need mass surveillance to fight covid-19—but it doesn’t have to be creepy”, looks at the pros and cons of AI technology and asks whether we now have the chance to “reinvent the way we collect and share personal data while protecting individual privacy.”

Public Health and Privacy Issues

Liza Lin and Timothy W. Martin in “How Coronavirus Is Eroding Privacy” write about how technology is being developed to track and monitor individuals for slowing the pandemic, but that this “raises concerns about government overreach.” Here is an excerpt from that WSJ article: “Governments worldwide are using digital surveillance technologies to track the spread of the coronavirus pandemic, raising concerns about the erosion of privacy. Many Asian governments are tracking people through their cellphones to identify those suspected of being infected with COVID-19, without prior consent. European countries are tracking citizens’ movements via telecommunications data that they claim conceals individuals’ identities; American officials are drawing cellphone location data from mobile advertising firms to monitor crowds, but not individuals. The biggest privacy debate concerns involuntary use of smartphones and other digital data to identify everyone with whom the infected had recent contact, then testing and quarantining at-risk individuals to halt the further spread of the disease. Public health officials say surveillance will be necessary in the months ahead, as quarantines are relaxed and the virus remains a threat while a vaccine is developed.

“In South Korea, investigators scan smartphone data to find within 10 minutes people who might have caught the coronavirus from someone they met. Israel has tapped its Shin Bet intelligence unit, usually focused on terrorism, to track down potential coronavirus patients through telecom data. One U.K. police force uses drones to monitor public areas, shaming residents who go out for a stroll.

“The Covid-19 pandemic is ushering in a new era of digital surveillance and rewiring the world’s sensibilities about data privacy. Governments are imposing new digital surveillance tools to track and monitor individuals. Many citizens have welcomed tracking technology intended to bolster defenses against the novel coronavirus. Yet some privacy advocates are wary, concerned that governments might not be inclined to unwind such practices after the health emergency has passed.

“Authorities in Asia, where the virus first emerged, have led the way. Many governments didn’t seek permission from individuals before tracking their cellphones to identify suspected coronavirus patients. South Korea, China and Taiwan, after initial outbreaks, chalked up early successes in flattening infection curves to their use of tracking programs.

“In Europe and the U.S., where privacy laws and expectations are more stringent, governments and companies are taking different approaches. European nations monitor citizen movement by tapping telecommunications data that they say conceals individuals’ identities.

“American officials are drawing cellphone location data from mobile advertising firms to track the presence of crowds—but not individuals. Apple Inc. and Alphabet Inc.’s Google recently announced plans to launch a voluntary app that health officials can use to reverse-engineer sickened patients’ recent whereabouts—provided they agree to provide such information.”

Germany Changes Course on Contact Tracing App

Politico reports that “the German government announced today” (4/26) “that Berlin would adopt a ‘decentralized’ approach to a coronavirus contact-tracing app — now backing an approach championed by U.S. tech giants Apple and Google. ‘We will promote the use of a consistently decentralized software architecture for use in Germany,’ the country’s Federal Health Minister Jens Spahn said on Twitter, echoing an interview in the Welt am Sonntag newspaper. Earlier this month, Google and Apple announced they would team up to unlock their smartphones’ Bluetooth capabilities to allow developers to build interoperable contact tracing apps. Germany is now abandoning a centralized approach spearheaded by the German-led Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project. Berlin’s U-turn comes after a group of six organizations on Friday urged Angela Merkel’s government to reassess plans for a smartphone app that traces potential coronavirus infections, warning that it does not do enough to protect user data.”
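The privacy difference between the two architectures can be made concrete in code. Below is a deliberately simplified Python sketch of the decentralized idea; the hash-based key scheme is our own illustrative stand-in, not the actual Apple/Google or PEPP-PT protocol. An infected user publishes only a random day key, and each phone re-derives and matches ephemeral identifiers locally, so the server never learns who met whom:

```python
# Simplified sketch of decentralized contact tracing (illustrative only;
# real protocols such as Apple/Google exposure notification and DP-3T
# differ in important details).
import hashlib
import os

def ephemeral_ids(day_key: bytes, slots: int = 96) -> set:
    """Derive the rotating Bluetooth identifiers broadcast from one day key."""
    return {hashlib.sha256(day_key + s.to_bytes(2, "big")).digest()[:16]
            for s in range(slots)}

# Alice's phone holds a secret day key and broadcasts derived identifiers.
alice_key = os.urandom(32)

# Bob's phone records identifiers it hears nearby, without knowing any keys.
bob_heard = set(list(ephemeral_ids(alice_key))[:10])  # overheard a few

# Alice tests positive: she uploads only her day key to the server.
published_keys = [alice_key]

# Bob downloads the published keys and matches locally, on his own device;
# the server never sees the contact graph.
for key in published_keys:
    if ephemeral_ids(key) & bob_heard:
        print("Exposure detected on Bob's device")
```

In the centralized design that Germany abandoned, phones would instead upload the identifiers they observed to a server that performs the matching, giving the operator a view of who met whom; that is the data-protection concern the six organizations raised.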

NSF Program on Fairness in Artificial Intelligence (FAI) in Collaboration with Amazon

A new National Science Foundation solicitation NSF 20-566 has been announced by the Directorate for Computer and Information Science and Engineering, Division of Information and Intelligent Systems, Directorate for Social, Behavioral and Economic Sciences, and Division of Behavioral and Cognitive Sciences.

Bias, Ethics, and Policy

We are planning a series of posts on Bias, starting with the background and context of bias in general and then focusing on specific instances of bias in current and emerging areas of AI. Ultimately, this information is intended to inform ideas on public policy. We look forward to your comments and suggestions for a robust discussion.

The extensive survey “A Survey on Bias and Fairness in Machine Learning” by Ninareh Mehrabi et al. will be useful for the conversation. The guest co-author of the ACM SIGAI Public Policy blog posts on Bias will be Farhana Faruqe, doctoral student in the George Washington University Human-Technology Collaboration program.

A related announcement is about the new section on AI and Ethics in the Springer Nature Computer Science journal. “The AI & Ethics section focuses on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future. It seeks to promote informed debate and discussion of the current and future developments in AI, and the ethical, moral, regulatory, and policy implications that arise from these developments.” As a Co-Editor of the new section, I welcome you to submit a manuscript and contact me with any questions and suggestions.

PCAST and AI Plan

Executive Order on The President’s Council of Advisors on Science and Technology (PCAST)

President Trump issued an executive order on October 22 re-establishing the President’s Council of Advisors on Science and Technology (PCAST), an advisory body that consists of science and technology leaders from the private and academic sectors. PCAST is to be chaired by Kelvin Droegemeier, director of the Office of Science and Technology Policy, and Edward McGinnis, formerly with DOE, is to serve as the executive director. The majority of the 16 members are from key industry sectors. The executive order says that the council is expected to address “strengthening American leadership in science and technology, building the Workforce of the Future, and supporting foundational research and development across the country.” For more information, see the Inside Education article about the first appointments.

Schumer AI Plan

Jeffrey Mervis has a November 11, 2019, article in AAAS News from Science on a recommendation for the government to create a new agency funded with $100 billion over 5 years for basic AI research. “Senator Charles Schumer (D–NY) says the initiative would enable the United States to keep pace with China and Russia in a critical research arena and plug gaps in what U.S. companies are unwilling to finance.”

Schumer gave his ideas publicly in a speech in early November to senior national security and research policymakers following a recent presidential executive order. He wants to create a new national science tech fund looking into “fundamental research related to AI and some other cutting-edge areas” such as quantum computing, 5G networks, robotics, cybersecurity, and biotechnology. Funds would encourage research at U.S. universities, companies, and other federal agencies and support incubators for moving research into commercial products. An additional article can be found in Defense News.

National AI Strategy

The National Artificial Intelligence Research and Development Strategic Plan – an update of the report by the Select Committee on Artificial Intelligence of the National Science & Technology Council – was released in June 2019, and the President’s Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, was released on February 11. The Computing Community Consortium (CCC) recently released the AI Roadmap Website, and an interesting industry response is “Intel Gets Specific on a National Strategy for AI: How to Propel the US into a Sustainable Leadership Position on the Global Artificial Intelligence Stage” by Naveen Rao and David Hoffman. Excerpts follow, and the accompanying links provide the details:

“AI is more than a matter of making good technology; it is also a matter of making good policy. And that’s what a robust national AI strategy will do: continue to unlock the potential of AI, prepare for AI’s many ramifications, and keep the U.S. among leading AI countries. At least 20 other countries have published, and often funded, their national AI strategies. Last month, the administration signaled its commitment to U.S. leadership in AI by issuing an executive order to launch the American AI Initiative, focusing federal government resources to develop AI. Now it’s time to take the next step and bring industry and government together to develop a fully realized U.S. national strategy to continue leading AI innovation.

“… But to sustain leadership and effectively manage the broad social implications of AI, the U.S. needs coordination across government, academia, industry and civil society. This challenge is too big for silos, and it requires that technologists and policymakers work together and understand each other’s worlds.” Their call to action was released in May 2018.

Four Key Pillars

“Our recommendation for a national AI strategy lays out four key responsibilities for government. Within each of these areas we propose actionable steps. We provide some highlights here, and we encourage you to read the full white paper or scan the shorter fact sheet.

Sustainable and funded government AI research and development can help to advance the capabilities of AI in areas such as healthcare, cybersecurity, national security and education, but there need to be clear ethical guidelines.

Create new employment opportunities and protect people’s welfare given that AI has the potential to automate certain work activities.

Liberate and share data responsibly, as the more data that is available, the more “intelligent” an AI system can become. But we need guardrails.

Remove barriers and create a legal and policy environment that supports AI so that the responsible development and use of AI is not inadvertently derailed.”

AI Race Matters

China, the European Union, and the United States have been in the news about strategic plans and policies on the future of AI. The July 2 AI Matters policy blog post was on the U.S. National Artificial Intelligence Research and Development Strategic Plan, released in June, as an update of the report by the Select Committee on Artificial Intelligence of The National Science & Technology Council. The Computing Community Consortium (CCC) recently released the AI Roadmap Website.
Now, a Center for Data Innovation Report compares the current standings of China, the European Union, and the United States and makes policy recommendations. Here is the report summary: “Many nations are racing to achieve a global innovation advantage in artificial intelligence (AI) because they understand that AI is a foundational technology that can boost competitiveness, increase productivity, protect national security, and help solve societal challenges. This report compares China, the European Union, and the United States in terms of their relative standing in the AI economy by examining six categories of metrics—talent, research, development, adoption, data, and hardware. It finds that despite China’s bold AI initiative, the United States still leads in absolute terms. China comes in second, and the European Union lags further behind. This order could change in coming years as China appears to be making more rapid progress than either the United States or the European Union. Nonetheless, when controlling for the size of the labor force in the three regions, the current U.S. lead becomes even larger, while China drops to third place, behind the European Union. This report also offers a range of policy recommendations to help each nation or region improve its AI capabilities.”