COVID-19 has accelerated the adoption of contactless biometric technology for smooth, seamless, and fast verification. Facial recognition offers a promising solution for efficient and secure authentication, particularly in border control.
Frictionless facial verification procedures that combine convenience, speed, and security are already being rolled out around the world. For example, the installation of 10 automated border control gates, or eGates, in 2018 allowed Luxembourg Airport to reduce processing times while maintaining the highest security standards, and thereby cope with a rising number of passengers flowing through the airport. The complete set of border control procedures can now take as little as 14 seconds per person.1
Facial biometric technology uses 2-D or 3-D sensors to capture a face, and algorithms then convert the captured image into a digital template. Automated systems allow an identity to be verified or identified easily and quickly, without physical interaction. New techniques, such as normalization, better account for faces captured at different angles or with subtle changes in mood or expression.2 While this increases the accuracy of facial recognition technology, it also raises issues around privacy, consent, and bias. The release of the new passport standard in 2025 means we are now in a crucial period in which the large-scale viability of innovative facial verification technologies must be ascertained, and their ethical implications evaluated.
Data privacy: verification vs. identification
When considering the ethics of facial recognition, it is important to first distinguish between facial verification and facial identification. While facial recognition is the umbrella term for identifying or verifying a person using their facial features, verification is a one-to-one comparison against a reference (such as a passport) to check that someone is in fact who they claim to be. Identification, on the other hand, is a one-to-many comparison: a single captured image is compared against many stored records. One application of this is surveillance, where cameras are employed to profile people. In such surveillance systems, people’s private data is often stored in a central database, which is more susceptible to privacy breaches.
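The one-to-one vs. one-to-many distinction can be sketched in a few lines of code. This is an illustrative toy, not a production system: real face templates are high-dimensional deep-learning embeddings, and the vectors, names, and threshold below are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face templates (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(live_capture, passport_template, threshold=0.8):
    """Verification: a one-to-one check against a single reference,
    e.g. the template stored on a passport chip."""
    return cosine_similarity(live_capture, passport_template) >= threshold

def identify(probe, gallery, threshold=0.8):
    """Identification: a one-to-many search across a whole database.
    Returns the best-matching identity, or None if nobody clears
    the threshold."""
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

Note that `verify` needs only the traveler’s own reference, while `identify` requires access to everyone’s templates – which is why identification implies the kind of central database discussed above.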
With facial verification, by contrast, the reference data is stored locally on a chip, or on a mobile or other device. Governments, airports, and companies cannot access the data, because it is not stored anywhere else. The use of such biometric technology should begin with consent. As border crossings become faster and more streamlined, many frequent travelers will likely be able to enroll in fast-track programs. For the foreseeable future, however, these will remain optional: people volunteer to use such fast-track systems and willingly consent to their data being used in a specific way to facilitate a faster border crossing.
To further protect citizens’ biometric data, organizations must follow the strict EU General Data Protection Regulation (GDPR) or comply with their own federal regulations, which ensure privacy is upheld. Before new technology is approved, it must fulfill certain standards and be officially evaluated by agencies such as the US government’s National Institute of Standards and Technology (NIST) or Germany’s Federal Office for Information Security (BSI). In addition, governments must continually adapt regulatory frameworks as new technologies or applications emerge. The EU, for example, has actively explored how the eIDAS regulation can be linked with the use of decentralized identifiers (DIDs) or self-sovereign identity – an approach based on the idea that users should be able to create and control their own identity, without relying on any centralized authority.3 Such discussions help to increase transparency and maintain consistently high standards across the industry.
Balancing speed and security
One issue that such standardization raises is inclusion and accessibility. Border control often focuses on individual passengers over the age of 16; groups, children, and disabled persons are currently excluded from using fast-track systems. In addition, most facial biometric algorithms inherit the biases that humans transfer into their training data, meaning an algorithm may struggle to detect the facial features of people from minority groups or with darker skin. To counteract such racial bias, it is crucial that training datasets broadly represent many different ethnicities. Organizations are increasingly considering how biometric technology can be made fairer and more inclusive. What must be weighed is the most effective and convenient biometric modality for each individual application and the level of security that is needed.
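One concrete way to surface such bias is disaggregated evaluation: computing an error rate such as the false match rate (FMR) separately for each demographic group rather than as a single aggregate. The sketch below uses invented trial records purely for illustration; real evaluations run millions of comparisons.

```python
def false_match_rate(trials):
    """Share of impostor comparisons (different people) that the
    system wrongly accepted as a match."""
    impostors = [t for t in trials if not t["same_person"]]
    if not impostors:
        return 0.0
    return sum(1 for t in impostors if t["accepted"]) / len(impostors)

def fmr_by_group(trials):
    """Break the false match rate down by demographic group, so a
    disparity between groups becomes visible instead of being
    averaged away in one overall number."""
    by_group = {}
    for t in trials:
        by_group.setdefault(t["group"], []).append(t)
    return {group: false_match_rate(ts) for group, ts in by_group.items()}
```

A large gap between the per-group rates is a signal that the training data needs rebalancing before the system is deployed at a border.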
A new EU research project on border control led by Veridos Identity Solutions, called D4FLY, aims to enhance the quality and efficiency of verification at border crossings. Importantly, consent is required from all users before any of its fast-track systems can be used. D4FLY, which stands for “Detecting Document frauD and iDentity on the fly,” involves a team of 19 partners, including universities, small and medium-sized enterprises, research institutes, and border control authorities. D4FLY evaluates techniques to counter emerging threats in identity verification such as imposter fraud, forgery, and morphed faces. It covers research topics such as 3-D face and iris recognition, the use of smartphones for enhanced traveler verification, and the potential benefits and applications of artificial intelligence, machine learning, and computer vision algorithms.
Processes will also become more streamlined with the use of more sophisticated pre-enrollment procedures, something D4FLY is also exploring. This could involve the use of an eVisa or other form of identification to enroll and initiate some type of check prior to a journey. This allows border control authorities to differentiate between low-risk and high-risk passengers – heightening convenience for many while still allowing secure and thorough checks when necessary.
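The routing logic described above can be sketched minimally as follows. The field names and rules are entirely hypothetical; real border systems draw on far richer risk models and data sources.

```python
def assign_lane(traveler):
    """Route travelers at the border: those who pre-enrolled and
    passed the pre-travel check go to the fast-track eGate,
    everyone else to a manual check.
    (Field names are invented for this sketch.)"""
    if traveler.get("pre_enrolled") and traveler.get("pre_check_passed"):
        return "fast-track"
    return "manual-check"
```

The point of the split is that the thorough manual process is preserved for cases that need it, while pre-vetted, low-risk travelers no longer queue for it.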
Collaboration between authorities, the private sector, and the general public is crucial to ensure the ethical dimensions of facial recognition are fully considered and to maximize the efficacy of new biometric technology. Such technology must embody privacy by design: security and data protection must be fundamental considerations from the moment it is built. Governments must also develop high standards and regulation and continually guarantee compliance with them. At the same time, it is crucial for the general public to learn about such technology and understand how their data might be used in different scenarios. With such measures in place, new facial verification technologies and processes can deliver greater speed and convenience without compromising on security or ethics.