Could facial recognition technologies spell the end of individual privacy?

Facial-recognition technology is unregulated and its reliability is yet to be proven. Yet it is already being installed and used at a growing number of locations across the United States, the UK, and beyond.

Facial recognition is a technology capable of identifying or verifying a person from a digital image or a frame of video. Facial recognition systems work in a variety of ways, but most compare selected facial features from a given image with faces stored in a database.

It is also described as a biometric, artificial-intelligence-based application that can uniquely identify a person by analyzing patterns in the person’s facial textures and shape. Although facial recognition is less accurate as a biometric than iris or fingerprint recognition, it is widely adopted because the process is contactless and non-invasive.
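To make that comparison step concrete: systems typically reduce each face to a numeric “embedding” (a vector of measurements), and identification then becomes a nearest-neighbour search over the enrolled database. Below is a minimal, purely illustrative Python sketch of that search. The names, random vectors and 0.6 threshold are hypothetical stand-ins for what a real face-encoding model and a tuned deployment would provide:

```python
import numpy as np
from typing import Optional

# Hypothetical enrolled database. In a real system each entry would be a
# 128-dimensional embedding produced by a neural network from an
# enrolment photo; random vectors are used here purely for illustration.
rng = np.random.default_rng(0)
DATABASE = {
    "alice": rng.random(128),
    "bob": rng.random(128),
}

def identify(probe: np.ndarray, threshold: float = 0.6) -> Optional[str]:
    """One-to-many identification: return the closest enrolled identity,
    or None if no database entry is within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, enrolled in DATABASE.items():
        dist = np.linalg.norm(probe - enrolled)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# A probe close to an enrolled face matches; a distant one is "unknown".
print(identify(DATABASE["alice"] + 0.01))  # -> "alice"
print(identify(rng.random(128) * 10))      # -> None
```

Everything about whether such a system is acceptable hinges on that database: who is enrolled in it, with what consent, and who gets to query it.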

Now, this might sound fine and dandy when organizations justify the use of this technology as a bona fide security or efficiency solution. But with data privacy regulators like the UK’s Information Commissioner’s Office (ICO) becoming increasingly concerned, could there be a dark side to this rapidly evolving surveillance technology?

Facial recognition systems have all the hallmarks of a “convenience trap” – a privacy-invading technology that has been slowly inching its way into our everyday lives since the 1970s.

Back in the 1970s, Goldstein, Harmon, and Lesk used 21 specific subjective markers, such as hair color and lip thickness, to automate facial recognition. The problem with this approach was that the measurements still had to be computed manually.

Today, with major advances in 3-D recognition and skin texture analysis, facial recognition systems achieve substantially higher success rates than their early forerunners. Modern systems are relatively insensitive to changes in expression, including blinking, frowning or smiling, and can compensate for facial hair growth and the wearing of glasses. According to manufacturers, they also perform uniformly across race and gender.

According to an investigation by civil liberties group Big Brother Watch, facial recognition systems are being extensively deployed at privately owned locations across the United States and the UK.

The group found an “epidemic” of this controversial technology across major property developers, shopping centres, museums, conference centres and casinos in the UK.

The investigation uncovered live facial recognition in Sheffield’s major shopping centre, Meadowhall.

Big Brother Watch said it also found the Millennium Point conference centre in Birmingham was using facial-recognition surveillance “at the request of law enforcement”. In the privacy policy on Millennium Point’s website, it confirms it does “sometimes use facial recognition software at the request of law enforcement authorities”.

Airport Facial Scanning: A Privacy Trap?

As you read this article, major airlines in the United States, together with the U.S. government, are scanning the faces of individuals who are not suspected of committing any crime. It’s America’s biggest step yet to normalize treating our faces as data that can be stored, tracked and, inevitably, stolen, according to Geoffrey A. Fowler, technology columnist for The Washington Post.

Fowler reports: “At JetBlue ‘e-gates’ and earlier prototypes, the airline has scanned 150,000 faces in the past two years to verify international travelers before they board.

“In Atlanta, Delta has an entire ‘biometric terminal’ that uses your face at check-in, bag drop, security and boarding. It says the scans help board international flights nine minutes faster, saving two seconds per passenger.”
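For what it’s worth, that arithmetic only adds up on a full aircraft: at two seconds saved per passenger, a nine-minute saving implies roughly 270 boarding passengers (270 × 2 s = 540 s = 9 min), about the capacity of a wide-body international jet.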

For now, airport facial recognition is focused on international travelers and is voluntary. Or, rather, U.S. citizens have the right to opt out.

But airports are stressful places where many of us are inclined to trade all sorts of liberties for the promise of safety or expedience. As one passenger boarding at Gate 18 told [Fowler] “I don’t care if you need to strip me naked, so long as it gets me onto that plane and makes us safe.”

Reality Check

So far as we can make out, facial recognition technology in airports has little to do with increasing security on flights. As Fowler points out, passengers are already screened for security by humans and machines – and face-scanning systems have to rely on human checks more often than officials care to admit.

Meanwhile, across the pond, London Gatwick is the UK’s first airport to commit to the permanent use of facial-recognition technology for passenger ID checks prior to boarding. The action follows a self-boarding trial carried out with budget airline EasyJet in 2018.

A spokesperson for the airport said the technology should reduce queuing times but travelers would still need to carry passports.

… Not surprisingly, privacy advocates are deeply concerned.

A spokeswoman for Gatwick told BBC News it had taken the decision, first reported by the Telegraph newspaper, after reviewing feedback from passengers in the earlier test. She said:

More than 90% of those interviewed said they found the technology extremely easy to use and the trial demonstrated faster boarding of the aircraft for the airline and a significant reduction in queue time for passengers, … Gatwick [is now planning] a second trial in the next six months and then rolling out auto-boarding technology on eight departure gates in the North Terminal when it opens a new extension to its Pier 6 departure facility in 2022.

She added passengers would still need to pass through the bag-check security zone, at which point they would need to present a boarding pass.

In addition, they would need to scan their passport at the departure gate for the system to be able to match the photo inside to their actual face. The process is similar to that already used at the ePassport arrival gates at some UK airports.
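In technical terms, this gate-side check is one-to-one verification rather than the one-to-many identification described earlier: the system only has to decide whether the live capture matches the single photo read from the passport chip. A minimal sketch, again assuming hypothetical precomputed embeddings and an illustrative threshold:

```python
import numpy as np

def verify(live_embedding: np.ndarray,
           passport_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    """One-to-one verification: does the live face match the passport
    photo? Unlike identification, no database search is involved."""
    distance = np.linalg.norm(live_embedding - passport_embedding)
    return bool(distance <= threshold)
```

The distinction matters for privacy: verification of this kind needs no central watchlist, whereas the live identification systems regulators are investigating search everyone who passes the camera.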

Concerns about Consent

Even so, one particular civil liberties group is worried travelers might not realize they can opt out.

Ioannis Kouvakas, of Privacy International, said:

Our main concern… would be the issue of proper consent, … Placing general or vague signs that merely let individuals know that this technology is being deployed, once individuals are already inside the check-in area, is inadequate, in our view, to satisfy the strict transparency and consent requirements imposed by data-protection laws.

He added:

If this would apply to child travelers… it raises even more concerns, considering the special protection afforded to children’s privacy and the risks associated with having their biometrics taken by the airport private entities.

A spokeswoman for Gatwick said it had designed its use of the technology to be “compliant with all data protection law” and passengers would be able to choose to have their passports checked by human staff.

Of course, the airlines have their own reasons for utilizing facial recognition technology – namely, speed.

Greater efficiency could possibly lead to happier customers and even cost reductions. If passengers can check their own luggage and board on their own, there is less rote work for employees, who can be reassigned to interact with customers elsewhere… or made redundant.

But what worries civil libertarians in the U.S. the most is that airports are scanning the faces of everyone, including U.S. citizens.

“If we give in to this, we are allowing the government and the airlines to build up giant face-recognition databases of all of us,” says Jennifer Lynch, the surveillance litigation director at the Electronic Frontier Foundation.

Organizations “must comply with the law”

And it’s not just the airlines that want to deploy facial recognition tech.

Last month (August 2019) it emerged that the privately owned King’s Cross estate – the area around central London’s King’s Cross Station – is using facial recognition, and Canary Wharf is considering following suit.

Information Commissioner Elizabeth Denham swiftly launched an investigation, saying she remains “deeply concerned about the growing use of facial recognition technology in public spaces, not only by law enforcement agencies but also increasingly by the private sector”.

Statement from Elizabeth Denham, Information Commissioner, on the use of live facial recognition technology at King’s Cross, London:

Scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all. That is especially the case if it is done without people’s knowledge or understanding.

I remain deeply concerned about the growing use of facial recognition technology in public spaces, not only by law enforcement agencies but also increasingly by the private sector. My office and the judiciary are both independently considering the legal issues and whether the current framework has kept pace with emerging technologies and people’s expectations about how their most sensitive personal data is used.

Facial recognition technology is a priority area for the ICO and when necessary, we will not hesitate to use our investigative and enforcement powers to protect people’s legal rights.

We have launched an investigation following concerns reported in the media regarding the use of live facial recognition in the King’s Cross area of central London, which thousands of people pass through every day.

As well as requiring detailed information from the relevant organisations about how the technology is used, we will also inspect the system and its operation on-site to assess whether or not it complies with data protection law.

Put simply, any organisations wanting to use facial recognition technology must comply with the law – and they must do so in a fair, transparent and accountable way. They must have documented how and why they believe their use of the technology is legal, proportionate and justified.

We support keeping people safe but new technologies and new uses of sensitive personal data must always be balanced against people’s legal rights.

In another controversial case, the UK Metropolitan Police’s use of the technology was recently slammed as highly inaccurate and “unlawful” in an independent report by researchers from the University of Essex.

Conclusion (Opinion)

Facial recognition technology has rapidly evolved from “Star Trek” fantasy into reality. Over the past decade, organizations have been falling over themselves to deliver facial recognition products.

The technology has become a familiar feature of everyday life, as we unlock our phones, enter our workplaces and homes, or board an aircraft, simply by looking into the lens of a camera.

But here’s the thing. While governments and law enforcement agencies around the world deploy invasive and controversial monitoring products in order to combat crime, airports, rail stations, and even rock concerts and museums are also installing facial recognition cameras.

So, how will this potential “surveillance epidemic” impact the privacy of regular law-abiding citizens who are simply trying to go about their daily business?

As data privacy regulators keep a beady eye on the users of this tech, and express their concerns every time another facial recognition system goes live in a public place or privately owned location, the question must be asked…

…Could compliance with data privacy laws become one rule for some, and another rule for others?

In other words, with so little regulation, could facial recognition technologies spell the end of individual privacy?

Sources and acknowledgements: Wikipedia, The Washington Post, The Register, and the ICO.

Contact the author
Peter Borner
Executive Chairman and Chief Trust Officer

As Co-founder, Executive Chairman and Chief Trust Officer of The Data Privacy Group, Peter Borner leverages over 30 years of expertise to drive revenue for organisations by prioritising trust. Peter shapes tailored strategies to help businesses reap the rewards of increased customer loyalty, improved reputation, and, ultimately, higher revenue. His approach provides clients with ongoing peace of mind, solidifying their foundation in the realm of digital trust.

Specialises in: Privacy & Data Governance
