Facial Recognition: What can possibly go wrong?

Facial recognition technology is being challenged globally, especially over its security, its impact on privacy, and its freedom from bias and prejudice.

With the introduction of the Face ID feature on the iPhone X in 2017, Apple brought face recognition technology to the forefront for many people. While development of face recognition technology began in the 1960s, it is the increase in computing power and new device form factors that have enabled the technology to gain widespread adoption. Facial recognition today is commonly used to unlock phones, open new digital bank accounts, and even verify travelers' identities at airport customs. However, it is this link between facial recognition technology and identity that is raising concerns about the technology's security, bias, and impact on user privacy.

Recently, a project proposing to use face recognition to identify criminals on the São Paulo Metropolitan Subway Company (Metro) in Brazil was hit with a civil lawsuit claiming that Metro does not ensure the data privacy and security of its roughly 4 million daily passengers, as reported by ZDNet. The initiative aimed to use high-definition cameras in subway stations to identify offenders by integrating facial recognition technology with the police database. The Brazilian Institute of Consumer Protection (IDEC) claims there is a lack of proof that the data captured from subway users is secure, especially for children and teenagers, who have special constitutional protection.

This project points to the first problem with the use of facial recognition technology: ensuring the security of the data. A person's face is immutable biometric data; unlike a password, it cannot be changed or reissued. Once this data is in the possession of bad actors, its owner can never again safely use it as proof of identity.

Another major issue with facial recognition is false positives: mistakes in identifying people, which can cause embarrassment and even injustice. Robert Williams, a resident of Detroit in the United States, was wrongly arrested by city police after facial recognition software they had recently adopted mistakenly identified him as a criminal. The Detroit police chief admitted the technology is unreliable, misidentifying people about 95% of the time. The accuracy of facial recognition depends on the datasets used to train the AI, and a training set can introduce bias in terms of race, age, and gender, based on which faces are included in it.
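To illustrate how this kind of bias can be surfaced, here is a minimal sketch (the scores, groups, and threshold are all hypothetical, not data from any real system) that computes a face matcher's false match rate separately per demographic group. A matcher that looks accurate in aggregate can still perform far worse for some groups:

```python
from collections import defaultdict

# Hypothetical evaluation records: the similarity score a face matcher
# produced for a pair of images, whether the pair is truly the same
# person, and the demographic group of the probe image.
results = [
    {"score": 0.91, "same_person": True,  "group": "A"},
    {"score": 0.72, "same_person": False, "group": "A"},
    {"score": 0.88, "same_person": False, "group": "B"},
    {"score": 0.95, "same_person": True,  "group": "B"},
    {"score": 0.81, "same_person": False, "group": "B"},
]

THRESHOLD = 0.80  # pairs scoring at or above this are declared a "match"

def false_match_rate_by_group(records, threshold):
    """False match rate = impostor pairs wrongly accepted / all impostor pairs."""
    impostors = defaultdict(int)
    false_matches = defaultdict(int)
    for r in records:
        if not r["same_person"]:          # impostor pair (different people)
            impostors[r["group"]] += 1
            if r["score"] >= threshold:   # wrongly declared a match
                false_matches[r["group"]] += 1
    return {g: false_matches[g] / impostors[g] for g in impostors}

print(false_match_rate_by_group(results, THRESHOLD))
# With this toy data: {'A': 0.0, 'B': 1.0} -- one global threshold,
# very different error rates per group. That disparity is exactly the
# mechanism behind wrongful identifications like the Williams case.
```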

Last but not least, it is challenging to deploy facial recognition technology in public spaces in a privacy-first way. Data collection should not be a prerequisite for using an essential service such as the metro. Users need to be able to control the data that is collected about them, and ideally data would be collected only after the user's explicit consent.
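As a sketch of what consent-first collection could look like in code (the class and function names here are hypothetical illustrations, not any vendor's API), capture is simply refused until the user has explicitly opted in, and consent can be revoked at any time:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Tracks explicit, revocable biometric opt-in per user. Hypothetical sketch."""
    _opted_in: set = field(default_factory=set)

    def grant(self, user_id: str) -> None:
        self._opted_in.add(user_id)

    def revoke(self, user_id: str) -> None:
        self._opted_in.discard(user_id)

    def has_consent(self, user_id: str) -> bool:
        return user_id in self._opted_in

def capture_face_image(user_id: str, registry: ConsentRegistry):
    # Default is no collection: without an explicit opt-in on record,
    # no biometric data is ever captured or stored.
    if not registry.has_consent(user_id):
        raise PermissionError(f"No biometric consent on record for {user_id}")
    ...  # proceed with capture only after explicit opt-in

registry = ConsentRegistry()
registry.grant("user-123")              # explicit opt-in
capture_face_image("user-123", registry)
registry.revoke("user-123")             # consent can be withdrawn at any time
```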

With the recent Black Lives Matter protests, several companies have made public statements restricting their use of facial recognition technology. IBM CEO Arvind Krishna, for example, issued an official statement saying the company would not offer, develop, or research facial recognition technologies, as he is against mass surveillance, racial profiling, and violations of the basic rights and freedoms of all people.

Amazon also announced it would ban police use of its facial recognition program for a year and is pressing for strict standards for the ethical use of the technology. Microsoft has likewise decided to limit the use of its facial recognition tool by police departments until a law is created to regulate its use.

With growing awareness of privacy and bias issues, new technologies are increasingly being evaluated on whether they protect user privacy and are free of bias. Facial recognition is currently under this scrutiny, and all technologies should expect the same going forward.

_____

At Incognia, privacy is at the core of our company; in fact, our primary company value is to respect and put user privacy first. We have created a digital identity based on each user's unique location behavior history. We focus on encrypting, anonymizing, and protecting the location data we collect, and we intentionally do not collect additional personally identifiable information. This removes the possibility of linking real-world identity and location, making the solution inherently bias-free and private.

Learn more about the Incognia location-based behavioral biometrics solution.