AI in Congress
Politico reports on two separate bills introduced Thursday, June 2. (See the section entitled “Artificial Intelligence: Let’s Do the Thing”.)
The National AI Research Resource Task Force Act. “The bipartisan, bicameral bill introduced by Reps. Anna Eshoo (D-Calif.), Anthony Gonzalez (R-Ohio), and Mikie Sherrill (D-N.J.), along with companion legislation by Sens. Rob Portman (R-Ohio) and Martin Heinrich (D-N.M.), would form a committee to figure out how to launch and best use a national AI research cloud. Public and private researchers and developers from across the country would share this cloud to combine their data, computing power and other resources on AI. The panel would include experts from government, academia and the private sector.”
The Advancing Artificial Intelligence Research Act. “The bipartisan bill
introduced by Senate Commerce Chairman Roger Wicker (R-Miss.), Sen. Cory
Gardner (R-Colo.) and Gary Peters (D-Mich.), a founding member
of the Senate AI Caucus, would create a program to accelerate research and
development of guidance around AI at the National Institute of Standards
and Technology. It would also create at least a half-dozen AI research
institutes to examine the benefits and challenges of the emerging
technology and how it can be deployed; provide funding to universities and
nonprofits researching AI; and launch a pilot at the National Science Foundation
for AI research grants.”
Concerns About Facial Recognition (FR): Discrimination, Privacy,
and Democratic Freedom
Beyond ethical and moral issues, a broader set of concerns about face recognition technology and AI troubles citizens and policymakers. Areas of concern include accuracy; surveillance; data storage, permissions, and access; discrimination, fairness, and bias; privacy and video recording without consent; democratic freedoms, including the rights to choose, gather, and speak; and abuse of the technology through non-intended uses, hacking, and deep fakes. Used responsibly and ethically, face recognition can be valuable for finding missing people, responsible policing and law enforcement, medical and healthcare uses, virus tracking, legal system and court uses, and advertising. Guidelines from organizations such as the AMA, and legislation such as S.3284 – Ethical Use of Facial Recognition Act, are being developed to encourage the proper use of AI and face recognition.
Some of the above issues specifically require ethical analysis, as in the following list by Yaroslav Kuflinski:
Accuracy — FR systems discriminate against non-whites, women, and children, with error rates of up to 35% for non-white women.
Surveillance issues — concerns
about “big brother” watching society.
Data storage — images of innocent people retained for future use alongside those of genuine criminals.
Finding missing people — breaches
of the right to a private life.
Advertising — invasion of
privacy by displaying information and preferences that a buyer would prefer to
keep secret.
Studies of commercial systems are increasingly available; one example is an analysis of Amazon Rekognition.
Sources of unfairness and discrimination in machine learning have been identified in two areas: the data and the algorithms. Biases in data skew what machine learning methods learn, and flaws in algorithms can lead to unfair decisions even when the data is unbiased. Intentional or unintentional biases can exist in the data used to train FR systems.
New human-centered design approaches seek to provide intentional system-development steps and processes for collecting data and creating high-quality databases, including the elimination of naturally occurring bias reflected in data about real people.
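As a concrete illustration of bias in data, the demographic skew of a training set can be audited before any model is trained. The sketch below is a minimal example with invented group labels and counts for a hypothetical face-image dataset; a group whose share falls far below its real-world share flags the kind of representation bias discussed above.

```python
from collections import Counter

def representation_report(samples):
    """Share of each group in a dataset, to surface sampling skew."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {group: round(n / total, 3) for group, n in counts.items()}

# Hypothetical group labels for a face-image training set.
labels = (["white_male"] * 700 + ["white_female"] * 150 +
          ["nonwhite_male"] * 100 + ["nonwhite_female"] * 50)

report = representation_report(labels)
# Here non-white women are 5% of the data despite being a far larger
# share of the population the system will be used on.
```

An audit like this is only a first step; balanced data does not by itself guarantee equal error rates across groups.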
Bias That Pertains Especially to Facial Recognition (Mehrabi, et al. and Barocas, et al.)
Direct Discrimination: “Direct discrimination happens when protected attributes of individuals explicitly result in non-favorable outcomes toward them”. Traits such as race, color, national origin, religion, sex, family status, disability, exercised rights under the CCPA, marital status, receipt of public assistance, and age are identified as sensitive or protected attributes in the machine learning world.
Indirect Discrimination: Even if sensitive or protected
attributes are not used against an individual, indirect discrimination can still
happen. For example, residential zip code is not categorized as a protected
attribute, but from the zip code one might infer race, which is a protected
attribute. So, “protected groups or individuals still can get treated unjustly
as a result of implicit effects from their protected attributes”.
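The zip-code example can be made concrete by comparing outcome rates across a proxy attribute. In the sketch below, all records, field names, and zip codes are invented for illustration: the model never sees race, yet a persistent gap between zip codes that track race would be indirect discrimination.

```python
from collections import defaultdict

def favorable_rate_by(records, proxy_key, outcome_key):
    """Rate of favorable outcomes per value of a proxy attribute."""
    tally = defaultdict(lambda: [0, 0])  # proxy value -> [favorable, total]
    for rec in records:
        tally[rec[proxy_key]][0] += rec[outcome_key]
        tally[rec[proxy_key]][1] += 1
    return {value: fav / total for value, (fav, total) in tally.items()}

# Invented loan decisions; zip code "10001" stands in for a neighborhood
# whose residents are mostly members of a protected group.
records = [
    {"zip": "10001", "approved": 0}, {"zip": "10001", "approved": 0},
    {"zip": "10001", "approved": 1}, {"zip": "20002", "approved": 1},
    {"zip": "20002", "approved": 1}, {"zip": "20002", "approved": 0},
]
rates = favorable_rate_by(records, "zip", "approved")
# The "10001" approval rate is half that of "20002", even though race
# itself never appears in the data.
```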
Systemic Discrimination: “policies, customs, or behaviors that are a part of the culture or structure of an organization that may perpetuate discrimination against certain subgroups of the population”.
Statistical Discrimination: In law enforcement, racial profiling is an example of statistical discrimination: minority drivers are pulled over more often than white drivers. “Statistical discrimination is a phenomenon where decision-makers use average group statistics to judge an individual belonging to that group.”
Explainable Discrimination: In some cases, discrimination can be explained using attributes like working hours and education, which is legal and acceptable. In “the UCI Adult dataset [6], a widely-used dataset in the fairness domain, males on average have a higher annual income than females; however, this is because on average females work fewer hours than males per week. Work hours per week is an attribute that can be used to explain low income. If we make decisions without considering working hours such that males and females end up averaging the same income, we could lead to reverse discrimination since we would cause male employees to get lower salary than females.”
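The analysis behind the UCI Adult example can be mimicked with toy numbers: compare the raw income gap between groups with the gap after conditioning on hours worked. Every record below is invented for illustration; on the real dataset the computation is analogous but larger.

```python
from statistics import mean

def group_gap(records, group_key, value_key):
    """Difference between the highest and lowest group means of a value."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec[value_key])
    means = [mean(values) for values in groups.values()]
    return max(means) - min(means)

# Invented records in which income tracks hours worked, not sex directly.
records = [
    {"sex": "M", "hours": 40, "income": 50}, {"sex": "M", "hours": 40, "income": 52},
    {"sex": "F", "hours": 40, "income": 51}, {"sex": "M", "hours": 30, "income": 39},
    {"sex": "F", "hours": 30, "income": 38}, {"sex": "F", "hours": 30, "income": 37},
]
raw_gap = group_gap(records, "sex", "income")          # looks discriminatory
gap_at_40 = group_gap([r for r in records if r["hours"] == 40],
                      "sex", "income")
# The raw gap vanishes once hours worked are held constant, so the income
# difference here is "explainable" by working hours.
```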
Unexplainable Discrimination: Unlike explainable discrimination, this type is not legal, because “the discrimination toward a group is unjustified”.
How to Discuss
Facial Recognition
Recent controversies about FR mix technology issues with ethical imperatives and ignore that people can disagree on which are the “correct” ethical principles. A recent ACM tweet on FR and face masks was interpreted in different ways, and ACM issued an official clarification. A question that emerges is whether AI and other technologies should be, and can be, banned rather than controlled and regulated.
In early June 2020, IBM CEO Arvind Krishna said in a letter to Congress that IBM is exiting the facial recognition business and asked for reforms to combat racism: “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna wrote to members of Congress. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
The guest co-author of this series of blog posts on AI and
bias is Farhana Faruqe, doctoral student in the George Washington University
Human-Technology Collaboration program.