Facial recognition: perfect or flawed?

We are seeing the increased use of facial recognition technology in our everyday lives. It is used to unlock our phones, making life a bit more convenient. It is also used in border control and even to help policing efforts through better surveillance. But how does facial recognition work, and how effective is it really?

Facial recognition technology uses biometrics: physical or behavioural human traits that can serve as a form of digital identification, much like a fingerprint stored on a computer. Your biometric data is then kept in a huge dataset alongside the biometrics of hundreds or thousands of other people. These biometrics could be certain facial features and the distances between them, like your eyes and nose. Other algorithms are more textural, focusing on the lines and spots on your face to identify you, and others still can even measure the depth of certain features, like your eye sockets. There are a few different techniques that can be used at this point to store and then compare images. One of these, described in The Science of Biometrics: Security Technology for Identity Verification, involves plotting them on a graph. For example, suppose an algorithm takes a picture of a face and measures and stores two things: the distance between your mouth and nose, and the distance between your eyes. A mathematical transformation then turns these measurements into values that can be plotted on a 2D graph, with each feature as an axis and your face essentially becoming a coordinate. Let’s apply this to something like unlocking your phone: when you take a picture to unlock it, the algorithm extracts these values from your face and compares the resulting coordinate against whatever reference image(s) it holds for you in its database. If the coordinates match, or are really close, the algorithm decides this is you and lets you unlock your phone. In reality, hundreds of factors and variables are measured, stored, and given values, so this plot would have many more dimensions. Other techniques and mathematical transformations have been developed over the years that can take more features into account, each with different prerequisites and limitations.
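The two-measurement example above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not a real algorithm: the measurements, the threshold, and the function names are all made up, and real systems compare points in hundreds of dimensions rather than two.

```python
import math

def face_to_point(eye_distance_mm, mouth_nose_distance_mm):
    """Turn two facial measurements into a coordinate on a 2D graph."""
    return (eye_distance_mm, mouth_nose_distance_mm)

def matches(enrolled, candidate, threshold=2.0):
    """Unlock only if the candidate point is close enough to the enrolled one."""
    dx = enrolled[0] - candidate[0]
    dy = enrolled[1] - candidate[1]
    return math.hypot(dx, dy) <= threshold  # straight-line distance between points

# Hypothetical values: the reference stored at setup vs. an unlock attempt.
enrolled = face_to_point(63.0, 34.0)
attempt = face_to_point(62.5, 34.5)
print(matches(enrolled, attempt))  # the points are ~0.7 apart, so this unlocks
```

A photo of a different face would land at a distant coordinate and fail the distance check, which is the whole idea behind treating a face as a point on a plot.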

Once these huge datasets of faces are made, a type of artificial intelligence called a neural network is often employed to do the identification part of facial recognition. Neural networks are interconnected collections of artificial neurons: mathematical functions that take an input, apply the function to whatever information they are given, and produce an output. This is very similar to biological neural networks like our brains, which are collections of neurons connected by synapses (the gaps between neurons) that let us do things like solve problems and remember facts. Artificial neural networks are, in a sense, an attempt to replicate biological ones. Neural networks are special in that they can be “trained” with datasets. The individual neurons may be simple or complicated functions, but when many of them are connected they can perform incredibly complex behaviour and processes. In the context of facial recognition technology, a neural network is given a huge dataset of thousands of faces to train it to correctly match faces to people. This can result in quite accurate facial recognition, reaching up to 98% accuracy.
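To make the idea of an artificial neuron concrete, here is a toy sketch: each neuron takes inputs, applies a mathematical function (here a weighted sum passed through a squashing function), and produces an output, and a few of them chained together already show how simple functions compose into more complex behaviour. The weights and feature values below are invented for illustration; a real network learns its weights by training on a large dataset of faces.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, squashed to (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Two first-layer neurons feed a single output neuron.
features = [0.8, 0.2]  # e.g. normalised face measurements (made up)
h1 = neuron(features, [0.5, -0.4], 0.1)
h2 = neuron(features, [-0.3, 0.9], 0.0)
same_person_score = neuron([h1, h2], [1.2, 0.7], -0.5)
print(round(same_person_score, 2))  # a score between 0 and 1
```

Training amounts to nudging those weights, over thousands of example faces, until the output score is high for matching pairs and low for everyone else.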

Indeed, this technology can have a very high accuracy rate, and it has a good track record on top of more commercial uses such as unlocking phones and even making payments in some countries, like China. Police in South Wales have argued that this technology has greatly helped them solve crimes. In India, it has been used to help reunite many families with lost children. More recently, in light of the coronavirus pandemic, Russia has even paired it with its surveillance system to help monitor the pandemic by detecting whether people are breaking quarantine.

However, there’s almost no such thing as a “perfect” facial recognition technology. On a smaller scale, these algorithms can be easily spoofed in cases like unlocking phones. A study from a Dutch non-profit organisation found that simply holding up a picture of someone’s face is enough to unlock their phone on many smartphone models. The Science of Biometrics: Security Technology for Identity Verification also highlights that this kind of technology has difficulty identifying people who have undergone a significant physical change. For example, if someone who was overweight lost a lot of weight, the AI would struggle to match the two pictures. The same goes for other changes such as growing or removing facial hair, ageing, switching between contact lenses and glasses, and putting on or taking off hats.

On a similar note, neural networks are trained by the data they are given. In other words, they are only as good as their dataset, just as teaching someone from an outdated curriculum leaves their knowledge outdated. A major problem with these technologies is that they can suffer from racial and gender bias as a result of these datasets (and possibly even without them – race and gender might just be something current facial recognition technology struggles to classify). The Gender Shades study found that the datasets used for two facial analysis benchmarks contained significantly more lighter-skinned subjects, which, as you might imagine, makes the AI more likely to correctly identify lighter-skinned people than darker-skinned people. In response, the researchers introduced a third dataset balanced by race and gender, but even when evaluating across all three datasets, they found that darker-skinned females were still the most misclassified group, with error rates reaching 34.7%, while lighter-skinned males had a significantly lower chance of being misclassified (a maximum error rate of 0.8%). This inherent bias has even been shown in the pedestrian-detection systems of some autonomous vehicles. In countries that use this technology quite extensively for policing and surveillance, such as the US, this is a huge problem: the groups most often misclassified, such as African Americans, are also the people who would be most affected by its use. Part of the Black Lives Matter movement voices these frustrations – darker-skinned people already face abuse from law enforcement, and such flawed technology, left unrestricted, might only further it.
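The kind of per-group gap the Gender Shades study measured is easy to compute once predictions are labelled by group. The records below are entirely invented and only mirror the shape of such an evaluation, not the study’s actual data or numbers.

```python
# Each record: (demographic group, whether the system identified the person correctly).
# These outcomes are fabricated for illustration only.
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("darker-skinned female", True), ("darker-skinned female", False),
    ("darker-skinned female", False),
]

def error_rate(group):
    """Fraction of misclassifications among records belonging to one group."""
    outcomes = [ok for g, ok in results if g == group]
    return sum(not ok for ok in outcomes) / len(outcomes)

for group in sorted({g for g, _ in results}):
    print(f"{group}: {error_rate(group):.0%} error rate")
```

An overall accuracy figure would hide exactly this kind of disparity, which is why evaluating each group separately matters.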

Data privacy is also an important issue, as many people are concerned that their pictures are being used in these datasets without their knowledge. As we have discussed, the AI behind this technology needs a huge dataset to train on and identify faces with – perhaps more than what is freely available or what the company already has. Some programs get around this by drawing on the huge collection of images of people’s faces on the internet. Clearview AI is a technology company that has built such a dataset of over 3 billion (and counting) images by collecting pictures from all over the internet, including social media sites such as Facebook, Instagram, and LinkedIn – a process called “data scraping”. Clearview AI sells access to this database to law enforcement agencies to aid in investigations. One issue with this practice is that, once posted online, people’s pictures might end up in the database in a way they have no control over. The mere existence of such a database is also a problem: it could fall into the wrong hands or be hacked and exposed, as Clearview AI’s was in early 2020. That breach showed the risk of people’s images being used, without their consent, in perhaps harmful ways; it also revealed that commercial entities like Walmart have had access to the database, possibly using these images to further increase profit.

Facial recognition technology today isn’t perfect. There’s no denying that it has helped track and apprehend criminals, and for some of us it even adds a bit more convenience to our lives. However, it has its fair share of flaws from both a technological and a data privacy perspective. With its inherent racial and gender bias and the privacy issues surrounding its use and maintenance, the extent of its use needs to be carefully monitored to prevent possibly irreparable damage from its misuse.


Image attributions (in order of appearance)
(1) Featured image: Want Festival – CV Dazzle by Pete Woodhead is licensed under CC BY-ND 2.0. 
(2) https://www.pxfuel.com/en/free-photo-jrfba
(3) https://pixabay.com/photos/artificial-intelligence-robot-ai-ki-2167835/
