Confronting Ethical Challenges Remains Key to Advancing Facial-Recognition Technology

By Cary Springfield, International Banker

This article was originally published in the Autumn/November 2021 edition of International Banker

In June, Europe’s privacy watchdogs, the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS), joined forces to call for a ban on the use of facial-recognition technology in public spaces. In doing so, the two bodies defied draft European Union (EU) legislation that would allow for the technology to be employed in the interest of public security. As such, it would seem that the fiercely contested debate surrounding the ethics of widespread use of such technology is far from resolved and is likely to remain so for some time to come.

“The EDPB and the EDPS call for a general ban on any use of AI for automated recognition of human features in publicly accessible spaces, such as recognition of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals,” the two organisations jointly stated. “A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI.”

By now, it has become clear that facial recognition is a powerful technology with the capability to intrude significantly on privacy and anonymity in certain situations. The proliferation of online images through social-media platforms such as Facebook and Instagram has only accelerated the technology’s development, as the accuracy of recognition algorithms continues to improve rapidly. It has also drawn considerable interest from law enforcement and governments, while some companies, such as Clearview AI, have gone as far as building a facial-recognition app on a database compiled by scraping more than three billion images from Facebook, YouTube and millions of other websites; the app lets a user see all public images of a person along with links to where each image was published.

Security appears to be the most important issue for advocates of wider adoption of facial-recognition technology. With the COVID-19 pandemic making mask-wearing a standard precaution, accurately identifying individuals now presents a distinct challenge to law enforcement. Indeed, a July 2020 report by Global Market Insights found that the facial-recognition market was valued at more than $3 billion in 2019 and is projected to grow at a compound annual growth rate (CAGR) of 18 percent between 2020 and 2026, with the rapid adoption of advanced face-identification systems for security and surveillance driving most of that growth. “Deployment of these solutions with camera networks will enable the police to track suspects, thieves or criminals on roads or in crowded places,” the report noted.
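
As a rough back-of-the-envelope illustration of what an 18-percent CAGR implies (an illustrative calculation, not a figure taken from the report itself), compounding the 2019 base of roughly $3 billion over the seven years to 2026 gives:

\[
\$3\ \text{billion} \times (1 + 0.18)^{7} \approx \$3\ \text{billion} \times 3.19 \approx \$9.6\ \text{billion}
\]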

And with the COVID-19 pandemic perhaps presenting the most urgent use case for facial-recognition technology, namely identifying those in breach of restrictions, the U.S. Department of Homeland Security (DHS) recently assessed the ability of facial-recognition algorithms to reliably collect and match images of individuals wearing a diverse array of face masks. Without masks, the median system identified individuals correctly 93 percent of the time, and the best-performing system 100 percent of the time; with masks, the median rate fell to 77 percent, while the best-performing system still achieved 96 percent. Performance also varied greatly between systems. “Based on these results, organizations that need to perform photo ID checks could potentially allow individuals to keep their masks on, thereby reducing the risk of COVID-19 infection,” the test concluded.

But while there are clear, unequivocal advantages to using facial-recognition solutions, the ethical issues and challenges that the technology must address have taken centre stage to date. As the American Civil Liberties Union (ACLU) explained, facial recognition has the potential to substantially limit anonymity, allowing the widespread tracking of the public and facilitating stalking and harassment. “Teens are particularly vulnerable to exploitation because they frequently use new technologies without a full understanding of the long-term consequences of that use,” the ACLU also highlighted.

With such concerns in mind, it is not surprising that the two European privacy bodies are far from being the only ones expressing serious concerns over the technology. “Even if the use of this technology is temporarily interrupted…that doesn’t obviate the threat that this technology poses both in the short and the long term,” Michael Kleinman from Amnesty International told BBC News. “Anyone walking in front of a camera where police departments are running facial recognition—their face can be captured, and they can be identified. That’s Orwellian.”

Perhaps somewhat encouragingly, financial firms are taking the lead in confronting some of the key ethical issues created by facial-recognition technology. In June, for instance, an investor group led by asset manager Candriam Investors Group, the European unit of US financial-services firm New York Life, called on those companies leading the development of the technology to do so ethically. “For investors to be able to fulfil our own responsibility to respect human rights, we call on companies to proactively assess, disclose, mitigate and remediate human rights risks related to their facial recognition products and services,” said Rosa van den Beemt, responsible investment analyst at BMO Global Asset Management and one of the 50 or so members of the investor group, which itself manages a whopping combined $4.5 trillion in assets. Other members of the group include Aviva Investors, Royal London Asset Management, NN Investment Partners and KLP.

“Technology should only ever be used to enhance human, social, and environmental well-being,” Huawei Technologies Co. said in response to the investors’ call. “We encourage a global conversation to develop ethics and governance standards around emerging technologies, and we continue to play our part in this conscious, ongoing, and collaborative effort.”

Nonetheless, as far as the financial-services industry is concerned, facial recognition is growing rapidly in popularity, both as a convenient digital-banking tool and as a means of strengthening banking security. The technology is proving especially popular in Asia. According to November 2019 research by data-analytics firm iiMedia Research, approximately 118 million Chinese users signed up for facial-recognition payments in 2019, almost double the 61 million users recorded in 2018. The report also expects user numbers to exceed 760 million by 2022, around half of China’s total population.

And Singapore’s biggest lender, DBS Bank, became the first private company in the city-state to adopt the Singpass (Singapore Personal Access) Face Verification system in July 2020. By taking a selfie, holders of Singpass, Singapore’s national digital-identity credential, can sign up for DBS digital-banking services from the comfort of their homes, providing a convenient digital-banking option during the COVID-19 pandemic. “Amid one of the greatest disruptions ever witnessed in our time, we are more cognisant than ever about the importance of leveraging digital technology to quickly serve up solutions that benefit the wider public,” said Jeremy Soo, the DBS head of consumer banking.

Elsewhere, banks including JPMorgan Chase, HSBC and USAA use Apple’s Face ID to let customers log securely into their mobile-banking apps, and in April, JPMorgan said it was “conducting a small test of video analytic technology with a handful of branches in Ohio”. In 2016, Mastercard launched a “selfie pay” app that allows customers to approve online purchases by snapping images of themselves with their smartphones. And last year, Nigeria’s Access Bank launched its facial-biometric payment system to verify customers’ identities and authorise retail transactions, while South Africa’s Standard Bank employs facial recognition through its mobile app to enable several features.
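
To give a sense of how such Face ID log-ins are typically wired up, the sketch below shows a minimal, hypothetical use of Apple’s LocalAuthentication framework to gate a mobile-banking session behind an on-device biometric check; the function name and prompt text are illustrative only and do not represent any particular bank’s implementation.

import Foundation
import LocalAuthentication

// Hypothetical sketch: gate access to a banking session behind Face ID
// (or Touch ID on older devices). Names and strings are illustrative only.
func unlockBankingSession(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // First confirm the device is able to run a biometric check at all.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        completion(false) // a real app would fall back to a passcode or password here
        return
    }

    // Ask the operating system to perform the on-device face (or fingerprint) match.
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Log in to your accounts") { success, _ in
        DispatchQueue.main.async {
            completion(success) // true only if the biometric check succeeded
        }
    }
}

Notably, the matching happens entirely on the handset, and the app receives only a pass or fail result; the bank never handles the underlying face data, which is part of what makes this pattern attractive from a privacy standpoint.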

California-based City National Bank will also begin facial-recognition trials in 2022 to identify customers at teller machines and employees at branches, according to Bobby Dominguez, the bank’s chief information security officer. Dominguez has, however, acknowledged that the bank is well aware of the civil-liberty issues surrounding the technology’s use. “We’re never going to compromise our clients’ privacy,” Dominguez told Reuters in April. “We’re getting off to an early start on technology already used in other parts of the world, and that is rapidly coming to the American banking network.”

The ACLU submitted guidelines, “An Ethical Framework for Facial Recognition,” to the U.S. Department of Commerce and the National Telecommunications and Information Administration (NTIA), including a handful of key principles “in order to operate an ethical, privacy-protective facial recognition system”. Such principles include “collection”, whereby an entity must receive informed, written and specific consent from an individual before enrolling him or her in a facial-recognition database; “use”, such that an entity must receive informed, written consent from an individual before using a facial-recognition system or faceprint in a manner not covered by existing consent; and “sharing” that prevents any faceprint or information derived from the operation of a face-recognition system from being sold or shared, except with the informed, written consent of the individual.
