Face recognition (FR) R&D has made great progress in recent years and has been prominent in the news. In public policy, many are calling for a reversal of the trajectory of FR systems and products. In the hands of people of good will – using products designed for safety and training systems with appropriate data – FR could improve life for society and individuals. The Verge reports on China’s use of the unique facial markings of pandas to identify individual animals. FR research also includes work to mitigate negative outcomes, such as the Adobe and UC Berkeley work on Detecting Facial Manipulations in Adobe Photoshop: automatically detecting when images of faces have been manipulated by splicing, cloning, or object removal.
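The Adobe/UC Berkeley work uses a trained neural network; as a much simpler illustration of the underlying intuition – that spliced regions often carry noise statistics inconsistent with the rest of the image – here is a toy, stdlib-only sketch. The function names, block size, and threshold are illustrative assumptions, not part of the published method, and the "image" is synthetic grayscale data rather than a real photograph.

```python
# Toy sketch of splice detection via local noise statistics.
# NOT the Adobe/UC Berkeley method; a minimal illustration of the
# intuition that a pasted-in region has inconsistent pixel variance.
import random
import statistics

def block_variances(img, block=8):
    """Split a 2D grayscale image into block x block tiles and return
    the pixel variance of each tile, keyed by (tile_row, tile_col)."""
    h, w = len(img), len(img[0])
    out = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            pixels = [img[y][x]
                      for y in range(by, by + block)
                      for x in range(bx, bx + block)]
            out[(by // block, bx // block)] = statistics.pvariance(pixels)
    return out

def flag_inconsistent_blocks(img, block=8, z_thresh=3.0):
    """Flag tiles whose variance is a statistical outlier relative to
    the whole image -- a crude proxy for splice localization."""
    tile_vars = block_variances(img, block)
    vals = list(tile_vars.values())
    mu = statistics.mean(vals)
    sd = statistics.pstdev(vals) or 1.0
    return [pos for pos, v in tile_vars.items()
            if abs(v - mu) / sd > z_thresh]

def make_synthetic(seed=0):
    """Build a 32x32 'photo' with mild sensor noise everywhere,
    plus one 8x8 'spliced' patch with very different noise."""
    random.seed(seed)
    img = [[128 + random.randint(-2, 2) for _ in range(32)]
           for _ in range(32)]
    for y in range(8, 16):
        for x in range(8, 16):
            img[y][x] = 128 + random.randint(-40, 40)
    return img

if __name__ == "__main__":
    print(flag_inconsistent_blocks(make_synthetic()))
```

Real detectors learn far subtler cues (resampling traces, compression artifacts, warping fields) from large datasets, but the structure is the same: score local regions and flag the ones that do not fit the image's global statistics.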
Intentional and unintentional application of systems that are not designed and trained for ethical use is a threat to society. Screening for terrorists could be beneficial, but FR-based lie and fraud detection systems may not work properly. The safety of FR is currently an important issue for policymakers, yet regulation could have negative consequences for AI researchers. As with many contemporary issues, conflicts arise because policies differ across countries.
Recent and current legislation attempts to restrict the use of FR and possibly FR research.
* San Francisco, CA; Somerville, MA; and Oakland, CA were the first three cities to limit the use of FR to identify people.
* “Facial recognition may be banned from public housing thanks to proposed law” – CNET reports that a bill will be introduced to address the issue: as “… landlords across the country continue to install smart home technology and tenants worry about unchecked surveillance, there’s been growing concern about facial recognition arriving at people’s doorsteps.”
* The major social media companies are being pressed on “how they plan to handle the threat of deepfake images and videos on their platforms ahead of the 2020 elections.”
* The digital rights group Fight for the Future has launched a call for a more comprehensive restriction: a complete federal ban on government use of facial recognition surveillance.
Beyond legislation restricting FR research and banning certain products, work is in progress to enable safe and ethical use of FR. A more general example that could be applied to FR is the MITRE work The Ethical Framework for the Use of Consumer-Generated Data in Health Care, which “establishes ethical values, principles, and guidelines to guide the use of Consumer-Generated Data for health care purposes.”