Recent and Upcoming Events

Brookings Webinar: Should the Government Play a Role in Reducing Algorithmic Bias?

On March 12, the Center for Technology Innovation at Brookings hosted a webinar on the role of government in identifying and reducing algorithmic biases (see video). Speakers discussed what is needed to prioritize fairness in machine-learning models and how to weed out artificial intelligence models that perpetuate discrimination. Questions included the following:
How do the European Union, U.K., and U.S. differ in their approaches to bias and discrimination?
What lessons can they learn from each other?
Should approaches to AI bias be universally applied to ensure civil and human rights for protected groups?

The organizers observe that “policymakers and researchers throughout the world are considering strategies for reducing biased decisions made by machine-learning algorithms. To date, the U.K. has been the most forward in outlining a role for government in identifying and mitigating biases and their unintended consequences, especially decisions that impact marginalized populations. In the U.S., legislators and policymakers have focused on algorithmic accountability and the explanation of models to ensure fairness in predictive decision making.”

The moderator was Alex Engler, Rubenstein Fellow – Governance Studies.
Speakers and discussants were
Lara Macdonald and Ghazi Ahamat, Senior Policy Advisors – UK Centre for Data Ethics and Innovation;
Nicol Turner Lee, Brookings Senior Fellow – Governance Studies and Director, Center for Technology Innovation; and
Adrian Weller, Programme Director for AI at the Alan Turing Institute.

Algo2021 Conference to Be Held on April 29, 2021

University College London will present The Algo2021 Conference: Ecosystems of Excellence & Trust online, building upon the success of its inaugural 2020 conference. The conference will provide a platform for all major stakeholders – academia, civil service, and industry – by showcasing cutting-edge developments, contemporary debates, and the perspectives of major players. The 2021 conference theme reflects the desire to promote public-good innovation. Sessions and topics include the following:
Machine Learning in Healthcare,
Trust and the Human-on-the-Loop,
Artificial Intelligence and Predictive Policing,
AI and Innovation in Healthcare Technologies,
AI in Learning and Education Technologies,
Building Communities of Excellence in AI, and
Human-AI and Ethics Issues.

Politico’s AI Online Summit on May 31, 2021

The 2021 Summit plans to dissect Europe’s AI legislative package, along with the impact of geopolitical tensions and tech regulations on topics such as data and privacy. The summit will convene top EU and national decision makers, opinion formers, and tech industry leaders.

“The European Commission will soon introduce legislation to govern the use of AI, acting on its aim to draw up rules for the technology sector over the next five years and on its legacy as the world’s leading regulator of digital privacy. At the heart of the issue is the will to balance the need for rules with the desire to boost innovation, allowing the old continent to assert its digital sovereignty. On where the needle should be, opinions are divided – and the publication of the Commission’s draft proposal will not be the end of the discussion.”
Issues to be addressed are the following:
How rules may fit broader plans to build European tech platforms that compete globally with other regions;
How new requirements on algorithmic transparency might be viewed by regular people; and
What kind of implementation efforts will be required for startups, mid-size companies and big tech.

The Politico 4th edition of the AI Summit will address important questions in panel discussions, exclusive interviews, and interactive roundtable discussions. Top regulators, tech leaders, startups, and civil society stakeholders will examine the EU’s legislative framework on AI and data flow while tackling uncomfortable questions about people’s fundamental rights, misinformation, and international cooperation that will determine the future of AI in Europe and worldwide.

AI Future

HCAI for Policymakers

“Human-Centered AI” by Ben Shneiderman was recently published in Issues in Science and Technology 37, no. 2 (Winter 2021): 56–61. A timely observation is that artificial intelligence is clearly expanding to include human-centered issues, from ethics, explainability, and trust to applications such as user interfaces for self-driving cars. The importance of this fresh HCAI approach, which can enable more widespread use of AI in safe ways that promote human control, is underscored by the article’s appearance in NAS Issues in Science and Technology. An implication of the article is that computer scientists should build devices to enhance and empower—not replace—humans.

HCAI as described by Prof. Shneiderman represents a radically different approach to systems design by imagining a different role for machines. Envisioning AI systems as comprising machines and people working together is a much different starting point than the assumption and goal of autonomous AI. In fact, a design process with this kind of forethought might even lead to a product not being developed, thus preventing future harm. One of the many interesting points in the NAS Issues article is the observation about the philosophical clash between two approaches to gaining knowledge about the world—Aristotle’s rationalism and Leonardo da Vinci’s empiricism—and the connection with the current perspective of AI developers: “The rationalist viewpoint, however, is dominant in the AI community. It leads researchers and developers to emphasize data-driven solutions based on algorithms.” Data science unfortunately often focuses on the rationalist approach without including the contributions from, and protection of, the human experience.

From the NAS article, HCAI is aligned with “the rise of the concept of design thinking, an approach to innovation that begins with empathy for users and pushes forward with humility about the limits of machines and people. Empathy enables designers to be sensitive to the confusion and frustration that users might have and the dangers to people when AI systems fail. Humility leads designers to recognize the inevitability of failure and inspires them to be always on the lookout for what wrongs are preventable.”

Policymakers need to “understand HCAI’s promise not only for our machines but for our lives. A good starting place is an appreciation of the two competing philosophies that have shaped the development of AI, and what those imply for the design of new technologies … comprehending these competing imperatives can provide a foundation for navigating the vast thicket of ethical dilemmas now arising in the machine-learning space.” An HCAI approach can incorporate creativity and innovation into AI systems by understanding and incorporating human insights about complexity into the design of AI systems and using machines to prepare data for taking advantage of human insight and experience. For many more details and enjoyable reading, go to https://issues.org/human-centered-ai/.

NSCAI Final Report

The National Security Commission on Artificial Intelligence (NSCAI) issued a final report. This bipartisan commission of 15 technologists, national security professionals, business executives, and academic leaders delivered an “uncomfortable message: America is not prepared to defend or compete in the AI era.” They discuss a “reality that demands comprehensive, whole-of-nation action.” The final report presents a strategy to “defend against AI threats, responsibly employ AI for national security, and win the broader technology competition for the sake of our prosperity, security, and welfare.”

The mandate of the National Security Commission on Artificial Intelligence (NSCAI) is to make recommendations to the President and Congress to “advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.” The 16 chapters in the Main Report contain many conclusions and recommendations, and an accompanying “Blueprints for Action” provides detailed steps for implementing them.

Face Recognition and Bad Science

FR and Bad Science: Should some research not be done?

Facial recognition issues continue to appear in the news, as well as in scholarly journal articles, while FR systems are being banned and some research is shown to be bad science. AI system researchers who try to associate facial technology output with human characteristics are sometimes referred to as machine-assisted phrenologists. Problems with FR research have been demonstrated in machine learning research such as the work by Steed and Caliskan in “A set of distinct facial traits learned by machines is not predictive of appearance bias in the wild.” Meanwhile, many examples of harmful products and misuses have been identified in areas such as criminality prediction, video interviewing, and many others. Some communities have considered bans on FR products.

Yet, journals and conferences continue to publish bad science in facial recognition.

Some people say the choice of research topics is up to the researchers – the public can choose not to use the products of their research. However, areas such as genetic, biomedical, and cybersecurity R&D do have limits. Our professional computing societies can choose to disapprove of research areas that cause harm. Ways to mitigate and prevent irresponsible research from being introduced into the public space include:
– Peer pressure on academic and corporate research and development
– Public policy through laws and regulations
– Corporate and academic self-interest – organizations’ bottom lines can suffer from bad publicity
– Vigilance by journals about publishing papers that promulgate the misuse of FR

A recent article by Matthew Hutson in The New Yorker asks who should stop unethical AI. He remarks that “Many kinds of researchers—biologists, psychologists, anthropologists, and so on—encounter checkpoints at which they are asked about the ethics of their research. This doesn’t happen as much in computer science. Funding agencies might inquire about a project’s potential applications, but not its risks. University research that involves human subjects is typically scrutinized by an I.R.B., but most computer science doesn’t rely on people in the same way. In any case, the Department of Health and Human Services explicitly asks I.R.B.s not to evaluate the ‘possible long-range effects of applying knowledge gained in the research,’ lest approval processes get bogged down in political debate. At journals, peer reviewers are expected to look out for methodological issues, such as plagiarism and conflicts of interest; they haven’t traditionally been called upon to consider how a new invention might rend the social fabric.”

OSTP News

OSTP Launches National AI Initiative Office

The White House Office of Science and Technology Policy announced the establishment of the National Artificial Intelligence Initiative Office. As outlined in legislation, this Office will serve as the point of contact on Federal AI activities across the interagency, as well as with private sector, academia, and other stakeholders. The Select Committee on Artificial Intelligence will oversee the National AI Initiative Office, and Dr. Lynne E. Parker, Deputy United States Chief Technology Officer, will serve as the Founding Director. As explained in Inside Tech Media, the newly enacted National Defense Authorization Act contains important provisions regarding the development and deployment of AI technologies, many of which build upon previous legislation introduced in the 116th Congress, including the establishment of the National AI Initiative Office.

White House Science Team

On January 15, key members of President-elect Biden’s science team were announced. The press release says, “These diverse, deeply experienced scientists and experts will play a key role in shaping America’s future — and will prepare us to lead the world in the 21st century and beyond.” President-elect Joe Biden said, “Science will always be at the forefront of my administration — and these world-renowned scientists will ensure everything we do is grounded in science, facts, and the truth. Their trusted guidance will be essential as we come together to end this pandemic, bring our economy back, and pursue new breakthroughs to improve the quality of life of all Americans.”
He will nominate Dr. Eric Lander as Director of the OSTP and to serve as the Presidential Science Advisor. “The president-elect is elevating the role of science within the White House, including by designating the Presidential Science Advisor as a member of the Cabinet for the first time in history.”
Other key members are
Alondra Nelson, Ph.D., OSTP Deputy Director for Science and Society
Frances H. Arnold, Ph.D., Co-Chair of the President’s Council of Advisors on Science and Technology
Maria Zuber, Ph.D., Co-Chair of the President’s Council of Advisors on Science and Technology
Francis S. Collins, M.D., Ph.D., Director of the National Institutes of Health
Kei Koizumi, OSTP Chief of Staff
Narda Jones, OSTP Legislative Affairs Director

Policy-Related Article from AI and Ethics

Stix, C., Maas, M.M. “Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy” AI Ethics (2021). https://doi.org/10.1007/s43681-020-00037-w

AI Policy Nuggets II

What Can Biden Do for Science?

A Science|Business Webcast presented a forum of public and private sector leaders discussing ideas about the need for the president-elect to convene world leaders to re-establish ‘rules of engagement’ on science.

Brookings Webinar on the Future of AI

“On November 17, 2020, the Brookings Institution Center for Technology Innovation hosted a webinar to discuss the future of AI, how it is being deployed, and the policy and legal issues being raised. Speakers explored ways to mitigate possible concerns and how to move forward safely, securely, and in a manner consistent with human values.”

Section 230 Update

Politico reports that “Trump for months has urged Congress to revoke industry legal shield Section 230, while its staunchest critics largely pushed to revamp it instead. But the president’s more drastic call for a total repeal — echoed by Biden for very different reasons — is gaining traction among Republicans in Washington. The NYT reported Thursday that White House chief of staff Mark Meadows has even offered Trump’s support for a must-pass annual defense spending bill if it includes such a repeal.”

The European AI Policy Conference

AI may be the most important digital innovation transforming industries around the world.
“Businesses in Europe are at the forefront of some of the latest advancements in the field, and European universities are home to the greatest concentration of AI researchers in the world. Every week, new case studies emerge showing the potential opportunities that can arise from greater use of the technology.” The European AI Policy Conference brings together leading voices in AI to discuss why European success in AI is important, how the EU compares to other world leaders today, and what steps European policymakers should take to be more competitive in AI. “The European AI Policy Conference is a high-level forum to connect stakeholders working to promote AI in Europe, showcase advances in AI, and promote AI policies supporting its development to EU policymakers and thought leaders.”

Policy Issues from AI and Ethics

The inaugural issue of the new journal AI and Ethics contains several articles relevant to AI and Public Policy.

Jelinek, T., Wallach, W. & Kerimi, D. “Policy brief: the creation of a G20 coordinating committee for the governance of artificial intelligence” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00019-y

This policy brief proposes a Group of Twenty (G20) coordinating committee for the governance of artificial intelligence (CCGAI) to plan and coordinate the mitigation of AI risks at the multilateral level. The G20 is the appropriate regime complex for such a metagovernance mechanism, given the involvement of the largest economies and their highest political representatives.

Gambelin, O. “Brave: what it means to be an AI Ethicist” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00020-5

This piece offers a preliminary definition of what it means to be an AI Ethicist, first examining the concept of an ethicist in the context of artificial intelligence, followed by exploring what responsibilities are added to the role in industry specifically, and ending on the fundamental characteristic that underlies it all: bravery.

Smith, P., Smith, L. “Artificial intelligence and disability: too much promise, yet too little substance?” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00004-5

Much has been written about the potential of artificial intelligence (AI) to support, and even transform, the lives of disabled people. Many individuals are benefiting, but what are the true limits of such tools? What are the ethics of allowing AI tools to suggest different courses of action, or aid in decision-making? And does AI offer too much promise for individuals? We draw conclusions as to how AI software and technology might best be developed in the future.

Coeckelbergh, M. “AI for climate: freedom, justice, and other ethical and political challenges” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00007-2

Artificial intelligence can and should help to build a greener, more sustainable world and to deal with climate change, but these opportunities also raise ethical and political issues that need to be addressed. This article discusses these issues, with a focus on problems concerning freedom and justice at a global level, and calls for responsible use of AI for climate in the light of these challenges.

Hickok, M. “Lessons learned from AI ethics principles for future actions” AI Ethics (2020). https://doi.org/10.1007/s43681-020-00008-1

The use of AI systems has become significantly more prevalent in recent years, and concerns about how these systems collect, use, and process big data have also increased. To address these concerns and advocate for ethical and responsible AI development and implementation, NGOs, research centers, private companies, and governmental agencies have published more than 100 AI ethics principles and guidelines. Lessons must be learned from the shortcomings of AI ethics principles to ensure that future investments, collaborations, standards, codes, and legislation reflect the diversity of voices and incorporate the experiences of those who are already impacted by AI.

Fall Nuggets

USTPC Panel on Section 230

On November 18 from 5:00 to 6:30 PM EST, experts from ACM’s US Technology Policy Committee (USTPC) will discuss the legal liability of Internet platforms such as Facebook and Twitter under Section 230 of the Communications Decency Act. The USTPC panelists are Andy Grosso (Moderator), Mark Rasch, Pam Samuelson, Richard M. Sherman, and Danny Weitzner.

Biden and Science

Participants in a Science|Business webcast urged that a global assembly “should press leaders of the big industrial nations to open – or re-open – their research systems, while also ensuring that COVID-19 vaccines are freely available to everyone in the world.” Robert-Jan Smits, former director-general of the European Commission’s research and innovation directorate, said an international summit “would really show that senior leaders are turning the page.”

Center for Data Innovation On the EU Data Governance Act

“The European Commission is planning to release its Data Governance Act to facilitate data sharing within the EU. The goal is to increase data sharing among businesses, make more public-sector data available for reuse, and foster data sharing of personal data, including for ‘altruistic’ purposes. While the goals of the act are commendable, many of the specific policies outlined in a draft would create a new data localization requirement, undermine the EU’s commitments to digital free trade, and contradict its open data principles.”

USTPC in the News

Overview

The ACM’s US Technology Policy Committee (USTPC) has been very active in July already! The contributions and visibility of USTPC, both as a group and through its individual members, are very welcome and impressive. The following list has links to highly recommended reading.

Amicus Brief: USTPC Urges Narrower Definition of Computer Fraud and Abuse Act

ACM’s USTPC filed an amicus curiae (“friend of the court”) brief with the United States Supreme Court in the landmark case of Van Buren v. United States. “Van Buren marks the first time that the US Supreme Court has reviewed the Computer Fraud and Abuse Act (CFAA), a 1986 law that was originally intended to punish hacking. In recent years, however, the CFAA has been used to criminally prosecute both those who access a computer system without permission, as well as those who have permission but exceed their authority to use a database once logged in.”

USTPC Statement on Face Recognition

ACM’s US Technology Policy Committee (USTPC) has assessed the present state of facial recognition (FR) technology as applied by government and the private sector. The Committee concludes that, “when rigorously evaluated, the technology too often produces results demonstrating clear bias based on ethnic, racial, gender, and other human characteristics recognizable by computer systems. The consequences of such bias, USTPC notes, frequently can and do extend well beyond inconvenience to profound injury, particularly to the lives, livelihoods and fundamental rights of individuals in specific demographic groups, including some of the most vulnerable populations in our society.”
See the NBC news article.

Barbara Simons recipient of the 2019 ACM Policy Award

USTPC’s Barbara Simons, founder of USTPC predecessor USACM, is the recipient of the 2019 ACM Policy Award for “long-standing, high-impact leadership as ACM President and founding Chair of ACM’s US Public Policy Committee (USACM), while making influential contributions to improve the reliability of and public confidence in election technology. Over several decades, Simons has advanced technology policy by founding and leading organizations, authoring influential publications, and effecting change through lobbying and public education.”
Congratulations, Barbara!

Potential New Issues

ACM Urges Preservation of Temporary Visa Exemptions for Nonimmigrant Students. The Harvard filing is a complaint for declaratory and injunctive relief.

This issue may have dramatic impacts on university research and teaching this fall.

Thank you USTPC for your hard work and representation of ACM to policymakers!

AI and Facial Recognition

AI in Congress

Politico reports on two separate bills introduced Thursday, June 2. (See the section entitled “Artificial Intelligence: Let’s Do the Thing”.)

The National AI Research Resource Task Force Act. “The bipartisan, bicameral bill introduced by Reps. Anna Eshoo (D-Calif.), Anthony Gonzalez (R-Ohio), and Mikie Sherrill (D-N.J.), along with companion legislation by Sens. Rob Portman (R-Ohio) and Martin Heinrich (D-N.M.), would form a committee to figure out how to launch and best use a national AI research cloud. Public and private researchers and developers from across the country would share this cloud to combine their data, computing power and other resources on AI. The panel would include experts from government, academia and the private sector.”

The Advancing Artificial Intelligence Research Act. “The bipartisan bill introduced by Senate Commerce Chairman Roger Wicker (R-Miss.), Sen. Cory Gardner (R-Colo.) and Gary Peters (D-Mich.), a founding member of the Senate AI Caucus, would create a program to accelerate research and development of guidance around AI at the National Institute of Standards and Technology. It would also create at least a half-dozen AI research institutes to examine the benefits and challenges of the emerging technology and how it can be deployed; provide funding to universities and nonprofits researching AI; and launch a pilot at the National Science Foundation for AI research grants.”

Concerns About Facial Recognition (FR): Discrimination, Privacy, and Democratic Freedom

Beyond ethical and moral issues, citizens and policymakers are concerned about a broader list of issues involving face recognition technology and AI. Areas of concern include accuracy; surveillance; data storage, permissions, and access; discrimination, fairness, and bias; privacy and video recording without consent; democratic freedoms, including the right to choose, gather, and speak; and abuse of technology such as non-intended uses, hacking, and deep fakes. Used responsibly and ethically, face recognition can be valuable for finding missing people, responsible policing and law enforcement, medical uses, healthcare, virus tracking, legal system and court uses, and advertising. Various guidelines by organizations such as the AMA and legislation like S.3284 – Ethical Use of Facial Recognition Act are being developed to encourage the proper use of AI and face recognition.

Some of the above issues do specifically require ethical analysis as in the following by Yaroslav Kuflinski:

Accuracy — FR systems naturally discriminate against non-whites, women, and children, presenting errors of up to 35% for non-white women.

Surveillance issues — concerns about “big brother” watching society.

Data storage — use of images for future purposes stored alongside genuine criminals.

Finding missing people — breaches of the right to a private life.

Advertising — invasion of privacy by displaying information and preferences that a buyer would prefer to keep secret.

Studies of commercial systems are increasingly available, for example an analysis of Amazon Rekognition.

Biases deriving from sources of unfairness and discrimination in machine learning have been identified in two areas: the data and the algorithms. Biases in data skew what is learned in machine learning methods, and flaws in algorithms can lead to unfair decisions even when the data is unbiased. Intentional or unintentional biases can exist in the data used to train FR systems.
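To make the data-side check concrete, here is a minimal Python sketch of one common audit step: computing per-group selection rates and a demographic parity gap for a model's decisions. This is a hypothetical illustration, not any organization's actual auditing tool; the group labels and decisions are invented.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favorable) pairs, favorable in {0, 1}."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical audit sample: (demographic group, model decision).
audit = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
         ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

rates = selection_rates(audit)
gap = max(rates.values()) - min(rates.values())
print(rates)                                 # {'A': 0.75, 'B': 0.25}
print(f"demographic parity gap: {gap:.2f}")  # 0.50
# A large gap flags the system for closer review; by itself it does not
# say whether the cause is biased training data or a flawed algorithm.
```

A check like this is only a first screen: it detects a disparity in outcomes, which then has to be traced back to the data or the algorithm.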

New human-centered design approaches seek to provide intentional system development steps and processes in collecting data and creating high quality databases, including the elimination of naturally occurring bias reflected in data about real people.

Bias That Pertains Especially to Facial Recognition (Mehrabi et al. and Barocas et al.)

Direct Discrimination: “Direct discrimination happens when protected attributes of individuals explicitly result in non-favorable outcomes toward them”. Traits such as race, color, national origin, religion, sex, family status, disability, exercised rights under the CCPA, marital status, receipt of public assistance, and age are identified as sensitive attributes or protected attributes in the machine learning world.

Indirect Discrimination: Even if sensitive or protected attributes are not used against an individual, indirect discrimination can still happen. For example, residential zip code is not categorized as a protected attribute, but from the zip code one might infer race, which is a protected attribute. So, “protected groups or individuals still can get treated unjustly as a result of implicit effects from their protected attributes”.

Systemic Discrimination: “policies, customs, or behaviors that are a part of the culture or structure of an organization that may perpetuate discrimination against certain subgroups of the population”.

Statistical Discrimination: In law enforcement, racial profiling is an example of statistical discrimination. In this case, minority drivers are pulled over more often than white drivers — “statistical discrimination is a phenomenon where decision-makers use average group statistics to judge an individual belonging to that group.”

Explainable Discrimination: In some cases, discrimination can be explained using attributes like working hours and education, which is legal and acceptable. In “the UCI Adult dataset [6], a widely-used dataset in the fairness domain, males on average have a higher annual income than females; however, this is because on average females work fewer hours than males per week. Work hours per week is an attribute that can be used to explain low income. If we make decisions without considering working hours such that males and females end up averaging the same income, we could lead to reverse discrimination since we would cause male employees to get lower salary than females.” (A minimal sketch after this list illustrates conditioning on working hours.)

Unexplainable Discrimination: This type of discrimination is not legal, in contrast to explainable discrimination, because “the discrimination toward a group is unjustified”.
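To illustrate the explainable-discrimination point, here is a small Python sketch that compares average income by sex both overall and within working-hours bands; any gap that survives the conditioning is the part hours cannot explain. The records below are invented for illustration, not the actual UCI Adult data.

```python
from collections import defaultdict

# Hypothetical (sex, hours_band, income) records; not the real Adult data.
records = [
    ("male", "40+", 62000), ("male", "40+", 58000), ("male", "<40", 30000),
    ("female", "40+", 60000), ("female", "<40", 31000), ("female", "<40", 29000),
]

def mean_income_by_sex(rows):
    incomes = defaultdict(list)
    for sex, _hours, income in rows:
        incomes[sex].append(income)
    return {sex: sum(v) / len(v) for sex, v in incomes.items()}

print("overall:", mean_income_by_sex(records))  # raw gap, partly explainable
for band in ("40+", "<40"):
    subset = [r for r in records if r[1] == band]
    # Any gap remaining *within* an hours band is the part that working
    # hours cannot explain.
    print(band, mean_income_by_sex(subset))
```

In this toy data the overall gap disappears within each hours band, so hours fully explain it; real data rarely behaves so cleanly, and a residual within-band gap would point to unexplainable discrimination.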

How to Discuss Facial Recognition

Recent controversies about FR mix technology issues with ethical imperatives and ignore that people can disagree on which are the “correct” ethical principles. A recent ACM tweet on FR and face masks was interpreted in different ways, and ACM issued an official clarification. A question that emerges is whether AI and other technologies should be, and can be, banned rather than controlled and regulated.

In early June 2020, IBM CEO Arvind Krishna said in a letter to Congress that IBM is exiting the facial recognition business and asked for reforms to combat racism: “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in his letter to members of Congress. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

The guest co-author of this series of blog posts on AI and bias is Farhana Faruqe, doctoral student in the George Washington University Human-Technology Collaboration program.

COVID AI

AI is in the news and in policy discussions regarding COVID-19, both about ways to help fight the pandemic and in terms of ethical issues that policymakers should address. Michael Corkery and David Gelles, in the NY Times article “Robots Welcome to Take Over, as Pandemic Accelerates Automation,” suggest that “social-distancing directives, which are likely to continue in some form after the crisis subsides, could prompt more industries to accelerate their use of automation.” An MIT Technology Review article by Genevieve Bell, “We need mass surveillance to fight covid-19—but it doesn’t have to be creepy,” looks at the pros and cons of AI technology and whether we now have the chance to “reinvent the way we collect and share personal data while protecting individual privacy.”

Public Health and Privacy Issues

Liza Lin and Timothy W. Martin in “How Coronavirus Is Eroding Privacy” write about how technology is being developed to track and monitor individuals for slowing the pandemic, but that this “raises concerns about government overreach.” Here is an excerpt from that WSJ article: “Governments worldwide are using digital surveillance technologies to track the spread of the coronavirus pandemic, raising concerns about the erosion of privacy. Many Asian governments are tracking people through their cellphones to identify those suspected of being infected with COVID-19, without prior consent. European countries are tracking citizens’ movements via telecommunications data that they claim conceals individuals’ identities; American officials are drawing cellphone location data from mobile advertising firms to monitor crowds, but not individuals. The biggest privacy debate concerns involuntary use of smartphones and other digital data to identify everyone with whom the infected had recent contact, then testing and quarantining at-risk individuals to halt the further spread of the disease. Public health officials say surveillance will be necessary in the months ahead, as quarantines are relaxed and the virus remains a threat while a vaccine is developed.

“In South Korea, investigators scan smartphone data to find within 10 minutes people who might have caught the coronavirus from someone they met. Israel has tapped its Shin Bet intelligence unit, usually focused on terrorism, to track down potential coronavirus patients through telecom data. One U.K. police force uses drones to monitor public areas, shaming residents who go out for a stroll.

“The Covid-19 pandemic is ushering in a new era of digital surveillance and rewiring the world’s sensibilities about data privacy. Governments are imposing new digital surveillance tools to track and monitor individuals. Many citizens have welcomed tracking technology intended to bolster defenses against the novel coronavirus. Yet some privacy advocates are wary, concerned that governments might not be inclined to unwind such practices after the health emergency has passed.

“Authorities in Asia, where the virus first emerged, have led the way. Many governments didn’t seek permission from individuals before tracking their cellphones to identify suspected coronavirus patients. South Korea, China and Taiwan, after initial outbreaks, chalked up early successes in flattening infection curves to their use of tracking programs.

“In Europe and the U.S., where privacy laws and expectations are more stringent, governments and companies are taking different approaches. European nations monitor citizen movement by tapping telecommunications data that they say conceals individuals’ identities.

“American officials are drawing cellphone location data from mobile advertising firms to track the presence of crowds—but not individuals. Apple Inc. and Alphabet Inc.’s Google recently announced plans to launch a voluntary app that health officials can use to reverse-engineer sickened patients’ recent whereabouts—provided they agree to provide such information.”

Germany Changes Course on Contact Tracing App

Politico reports that “the German government announced today” (4/26) “that Berlin would adopt a ‘decentralized’ approach to a coronavirus contact-tracing app — now backing an approach championed by U.S. tech giants Apple and Google. ‘We will promote the use of a consistently decentralized software architecture for use in Germany,’ the country’s Federal Health Minister Jens Spahn said on Twitter, echoing an interview in the Welt am Sonntag newspaper. Earlier this month, Google and Apple announced they would team up to unlock their smartphones’ Bluetooth capabilities to allow developers to build interoperable contact tracing apps. Germany is now abandoning a centralized approach spearheaded by the German-led Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project. Berlin’s U-turn comes after a group of six organizations on Friday urged Angela Merkel’s government to reassess plans for a smartphone app that traces potential coronavirus infections, warning that it does not do enough to protect user data.”
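For readers unfamiliar with the centralized-versus-decentralized distinction, here is a highly simplified Python sketch of the decentralized idea. It illustrates the concept only; it is not the actual Apple/Google Exposure Notification protocol, and it omits the real system's cryptography, key rotation, and Bluetooth layer.

```python
import secrets

def new_rolling_id() -> str:
    # Rotating random identifier, broadcast over Bluetooth and changed frequently.
    return secrets.token_hex(16)

# Phone A broadcasts rolling IDs; phone B records the IDs it hears nearby.
phone_a_ids = [new_rolling_id() for _ in range(4)]
phone_b_heard = set(phone_a_ids[:2]) | {new_rolling_id()}  # partial contact

# If A's user tests positive, A consents to publish only A's own broadcast IDs.
published_ids = set(phone_a_ids)

# B downloads the published list and checks for overlap on the device itself;
# no central server ever sees B's contact history or location.
was_exposed = bool(phone_b_heard & published_ids)
print(was_exposed)  # True
```

The design choice at stake: in a centralized scheme the matching happens on a government or health-authority server, which therefore learns the contact graph; in the decentralized scheme sketched above, the server only hosts the published identifiers of consenting diagnosed users.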

NSF Program on Fairness in Artificial Intelligence (FAI) in Collaboration with Amazon

A new National Science Foundation solicitation, NSF 20-566, has been announced by the Directorate for Computer and Information Science and Engineering (Division of Information and Intelligent Systems) and the Directorate for Social, Behavioral and Economic Sciences (Division of Behavioral and Cognitive Sciences).