FR and Bad Science: Should some research not be done?
Facial recognition (FR) issues continue to appear in the news, as well as in scholarly journal articles, while FR systems are being banned and some research is shown to be bad science. AI researchers who try to infer human characteristics from facial-recognition output are sometimes dismissed as machine-assisted phrenologists. Problems with FR research have been demonstrated in machine learning studies such as Steed and Caliskan's "A set of distinct facial traits learned by machines is not predictive of appearance bias in the wild." Meanwhile, many examples of harmful products and misuses have been identified in areas such as criminality prediction and video interviewing, among others. Some communities have considered bans on FR products.
Yet, journals and conferences continue to publish bad science in facial recognition.
Some people say the choice of research topics is up to the researchers – the public can choose not to use the products of their research. However, areas such as genetic, biomedical, and cybersecurity R&D do have limits. Our professional computing societies can choose to disapprove of research areas that cause harm. Mechanisms for mitigating and preventing the introduction of irresponsible research into the public space include:
– Peer pressure on academic and corporate research and development
– Public policy through laws and regulations
– Corporate and academic self-interest – organizations' bottom lines can suffer from bad publicity
– Vigilance by journals against publishing papers that promulgate the misuse of FR
A recent article by Matthew Hutson in The New Yorker asks "Who should stop unethical AI?" He remarks that "Many kinds of researchers—biologists, psychologists, anthropologists, and so on—encounter checkpoints at which they are asked about the ethics of their research. This doesn't happen as much in computer science. Funding agencies might inquire about a project's potential applications, but not its risks. University research that involves human subjects is typically scrutinized by an I.R.B., but most computer science doesn't rely on people in the same way. In any case, the Department of Health and Human Services explicitly asks I.R.B.s not to evaluate the "possible long-range effects of applying knowledge gained in the research," lest approval processes get bogged down in political debate. At journals, peer reviewers are expected to look out for methodological issues, such as plagiarism and conflicts of interest; they haven't traditionally been called upon to consider how a new invention might rend the social fabric."