[Animation: a multicolored clip showing a microprocessor illuminated with the letters AI]
 

Three AI trends reshaping the identity life cycle


The widespread experimentation with generative artificial intelligence (AI) tools like ChatGPT over the past year has certainly raised both public and commercial awareness of the potential of AI. In many fields of business and government, though, different strands of AI – including deep learning, machine learning, natural language processing, and computer vision – have been progressively making their way into products and services for some time. 

Over the past decade in the world of identity management, AI has become a central – if low-profile – presence. Machine learning techniques making use of neural networks have been applied to improve the effectiveness of real-time biometric authentication and to spot fraudulent documents. 

They are used in ID enrollment processes to detect manual errors or duplicate entries in databases. And they have helped to ensure the production quality of ID documents, such as passports, and the 50-plus security features that they might contain. The result: faster, more efficient, and more secure identity solutions for government agencies and an improved user experience for citizens.

But that just represents the beginning for this revolutionary technology, says Letizia Bordoli, AI Lead and Senior AI Product Manager at Veridos. “AI presents far-reaching opportunities in the identities sphere. It can bring a wealth of benefits to improve efficiency, secure processes, and enhance privacy and convenience for users. But AI can also bring a lot of challenges and threats as a result of its use for malicious intent,” she says.

Among the many areas of AI’s application in identities, Bordoli highlights three major – and overlapping – trends that stand out and will have a big impact on business, society, and individuals in the near future: 

  • Synthetic data
  • Privacy-preserving AI
  • Multimodal AI

Trend 1: Synthetic data – enhancing authentication, exposing fakes

[Illustration: the view from a busy airport lounge, with an airplane taking off against the backdrop of the sun]

Training AI in biometric identification

“An AI model is only going to be as good as the data it is trained on,” says Bordoli. But sometimes gathering sufficient quantities of quality data is simply not practical – or ethical. Fortunately, AI itself, in the form of generative AI, is capable of creating synthetic data that can then be used to train models, including those used in biometric-based identity.

Face recognition provides a good example. With a participant’s permission, generative AI can create hyper-realistic images of a person’s face from a single supplied image, presenting the face in multiple scenarios: at different angles, in poor light, wearing glasses and other accessories, with different facial hair, and so on. That breadth of data helps at, say, border control, where the system must confirm that the genuine individual is present despite the variation from their original passport image. “The more different scenarios the system sees within the use-case scenario, the more robust it will be once it’s deployed in the field,” says Bordoli.
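As a rough illustration of that principle, the sketch below derives many training variants from a single face image. It is a deliberate simplification: the generative approach described above would use GAN- or diffusion-based models to produce hyper-realistic variants, whereas the classical OpenCV transforms here (rotation, lighting shifts, sensor noise) only demonstrate the underlying idea of expanding one image into a training set. It describes no particular vendor's method.

```python
# Minimal sketch: derive many training variants from one enrolled face
# image. Real systems would use generative models (GANs/diffusion) for
# hyper-realistic variation; classical transforms here only illustrate
# the "many samples from one image" idea.
import cv2
import numpy as np

def face_variants(image: np.ndarray, n: int = 20, seed: int = 0):
    """Yield n randomly perturbed copies of a face image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    for _ in range(n):
        # Random head tilt: rotate around the image center.
        angle = rng.uniform(-25, 25)
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        variant = cv2.warpAffine(image, m, (w, h))
        # Random lighting: scale contrast and shift brightness.
        alpha = rng.uniform(0.6, 1.4)   # contrast factor
        beta = rng.uniform(-40, 40)     # brightness offset
        variant = cv2.convertScaleAbs(variant, alpha=alpha, beta=beta)
        # Sensor noise, as on a low-quality border-control camera.
        noise = rng.normal(0, 8, variant.shape)
        variant = np.clip(variant.astype(np.float32) + noise, 0, 255)
        yield variant.astype(np.uint8)

# Usage: expand one passport photo into a small training set.
# img = cv2.imread("passport_photo.jpg")
# dataset = list(face_variants(img, n=50))
```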

That many-from-one approach can also be applied in fraud detection. There, a team might have only one example of a particular fraud but can extrapolate from it, populating a model with synthetic frauds without having to wait for someone to attempt them. The model then builds on these and generates further scenarios, growing in robustness.
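One long-established, non-generative way to realize this extrapolation is SMOTE, which synthesizes new minority-class samples by interpolating between the few known fraud cases. The sketch below, using the open-source imbalanced-learn library and made-up feature data, illustrates the principle; the generative approach described above goes further by inventing genuinely new scenarios rather than interpolating between old ones.

```python
# Minimal sketch: synthesize extra fraud examples from very few real
# ones. SMOTE interpolates new minority-class samples between known
# fraud cases; it stands in here for the richer generative approach
# described in the article. Requires: pip install imbalanced-learn
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)

# Hypothetical feature vectors: 500 genuine transactions, 6 known frauds.
X_genuine = rng.normal(0.0, 1.0, size=(500, 8))
X_fraud = rng.normal(3.0, 1.0, size=(6, 8))
X = np.vstack([X_genuine, X_fraud])
y = np.array([0] * 500 + [1] * 6)

# k_neighbors must be smaller than the number of known fraud samples.
smote = SMOTE(k_neighbors=5, random_state=0)
X_balanced, y_balanced = smote.fit_resample(X, y)

print(f"fraud samples before: {int(y.sum())}, after: {int(y_balanced.sum())}")
# A detection model can now be trained on X_balanced, y_balanced.
```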

Creating synthetic identities to anonymize images and protect privacy

AI can be used to remove identifiable features from images and video streams featuring people – in real time. For example, when training a model to monitor an airport and detect whether someone has left a bag unattended, it is possible to preserve the privacy of everyone in the CCTV images of the training dataset by using AI to replace the faces observed with those of people who simply don’t exist. This preserves the privacy of individuals while maintaining the quality of the data, resulting in a more robust trained model.
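A minimal sketch of the detection-and-replacement step is shown below. It uses OpenCV's bundled face detector and pixelates each detected face as a stand-in for the full technique, which would swap in AI-generated faces of non-existent people to keep the training data realistic. The code is illustrative only, not a description of any production system.

```python
# Minimal sketch: strip identifiable faces from a CCTV frame before it
# enters a training set. A production pipeline would swap in
# GAN-generated faces of non-existent people to keep the data
# realistic; pixelation below is a simple stand-in for that step.
import cv2

# OpenCV's bundled Haar-cascade frontal-face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymize_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = frame[y:y + h, x:x + w]
        # Pixelate: downscale then upscale the face region.
        small = cv2.resize(roi, (8, 8), interpolation=cv2.INTER_LINEAR)
        frame[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST
        )
    return frame

# Usage on a video stream, frame by frame:
# ok, frame = capture.read()
# safe_frame = anonymize_frame(frame)
```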

Confronting AI-generated impostors

Generative AI provides a powerful platform for fraudsters to create false but convincing images and messages, including voice messages and video “deep fakes” that build on an original media clip.

“Generative AI is probably being adopted as fast by fraudsters as it is by businesses,” says Bordoli. “In the last year, a lot of content has emerged that can look and sound genuine.” She points to an example: someone may receive a voicemail from a friend, colleague, or relative – a message that sounds exactly like them – asking for money to be sent immediately to deal with a crisis.

Such deception is relevant not only to individuals: many organizations have developed business processes that rely on voice or face recognition, such as a bank that allows a customer to open an account via a mobile phone. At a state level, such activities even have the potential to affect national security – a fake video of a country’s leaders declaring a state of emergency, for example.

Authentication agencies can confront at least some of those threats with advanced technologies that detect whether the real person is present. Veridos’s VeriCHECK SelfKiosk self-service border control systems, for example, include presentation attack detection capabilities to determine whether the biometrics presented by a traveler are genuine or whether somebody is trying to fool the system by wearing a mask or holding up a photo at the checkpoint.

“AI presents far-reaching opportunities in the identities sphere. It can bring a wealth of benefits to improve efficiency, secure processes, and enhance privacy and convenience for users.”
Letizia Bordoli
AI Lead, Senior AI Product Manager, Veridos

Trend 2: AI strategies for preserving privacy

Data is essential for AI. Going forward, organizations of all kinds will need access to sufficiently large quantities of high-quality data to ensure the AI systems they deploy on live data can be trusted when they are tasked with recognizing patterns, making predictions, taking decisions, and so on. 

This focus on data is only one step toward creating such “responsible AI.” There also needs to be a clear understanding of how algorithms will behave in the field before they are deployed.

There are already many well-documented instances of AI algorithms showing strong negative biases, in particular against certain ethnic groups and genders. There are also numerous cases of companies engaging in unethical data-gathering practices in which individuals don’t fully realize how their data might be used (for example, when they sign up to a website).

One of the major responses to such concerns is the European Union’s Artificial Intelligence Act, which proposes to regulate AI and establish a legal framework that classifies AI systems by their risk level. The Act, which is expected to be adopted in 2024, mandates various requirements for AI development and use, including prohibiting certain use cases, such as surveillance or social scoring of people.

“The Act is all about creating trustworthy AI,” says Bordoli. The big benefit for both industry and society is that by tightening the rules, AI systems will become more trusted, she adds. And the EU is not alone in its thinking: Canada and the UK, among others, are also developing regulatory frameworks for AI use.

As a leader in identity solutions, Veridos places a high priority on trustworthy AI and collaborates widely in this area with both academic and industry partners. “Trustworthy AI is a cornerstone of our work. We aim to ensure that our AI systems function in a manner that is fair, transparent, and respects human autonomy,” says Bordoli.

To address bias, Veridos invests in the continuous training of its AI models and the diversification of training data. It also employs rigorous testing methodologies to identify and rectify any inherent bias before a system is deployed. 

At the same time, the company has a partnership with the renowned University of Erlangen-Nuremberg to research state-of-the-art methodologies for trustworthy AI and to develop a holistic implementation across the complete AI life cycle. With that in mind, the partnership has identified more than 40 quantitative and qualitative metrics to assess the trustworthiness of AI systems.

Veridos’s privacy-preserving AI solutions are designed to both protect user data and ensure data sovereignty. A key technology it leverages in this area is federated learning. This innovative approach to machine learning allows an algorithm to be trained across multiple decentralized devices or servers, each holding a local dataset, without the data itself being exchanged. In essence, federated learning ensures that data remains in its original location, thereby preserving data sovereignty and minimizing the risk of privacy breaches.

For example, using a federated learning system, a healthcare provider could make patient data available to help train a new AI-based cancer diagnostics machine, while ensuring the absolute privacy of that patient data.
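The standard algorithm behind this kind of setup is federated averaging (FedAvg): each site trains a copy of the model on its own data and sends back only updated weights, which a central server averages into the next global model. The NumPy sketch below shows the mechanic with a toy linear model and three simulated sites; production systems use dedicated federated-learning frameworks and typically add secure aggregation and differential privacy on top.

```python
# Minimal sketch of federated averaging (FedAvg): each site trains
# locally on data that never leaves it; only model weights travel.
# Toy linear model for illustration only.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training pass; X and y stay on the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

# Three hypothetical hospitals, each with its own private dataset.
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

global_w = np.zeros(3)
for _ in range(20):
    # Each site returns updated weights, never raw patient records.
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    # The server averages the weights (in general, weighted by data size).
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", np.round(global_w, 2))  # approaches true_w
```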

[Illustration: a young woman with a suitcase in a departure lounge, holding up a travel document]

Trend 3: Multimodal AI – combinations that strengthen identity

Multimodal AI promises to be highly useful in strengthening identity verification. To date, AI models have usually been restricted to a single modality – one model understands images, another understands text, and so on. Today’s identity verification systems, for example, are usually based on a single biometric such as face, iris, or fingerprint. Going forward, however, a single model might be able to make sense of different types of data simultaneously – analyzing and processing, say, the combination of a person’s photo and voice as they go through border control.

“In many ways multimodal AI is what an immigration officer is doing when they manually inspect a passport,” says Bordoli. “They might look at a combination of data – the photo, the date of birth, the gender, the height – and compare that information against the person standing in front of them.”

In a similar way, by using multimodal AI to analyze images, audio, and text, identity systems would potentially be able to detect anomalies on a much broader scale than with just one modality, she says – making life much more difficult for identity fraudsters.
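A simple way to picture the gain, short of a fully joint multimodal model, is score-level fusion: each modality's matcher produces a similarity score, and the system combines them into a single decision. The sketch below is purely illustrative, with made-up weights and thresholds; a true multimodal model would learn a shared representation of image, audio, and text rather than merging separate scores.

```python
# Minimal sketch: fuse match scores from several modalities into one
# verification decision. A genuinely multimodal model would learn a
# joint representation of image, audio, and text; weighted score
# fusion merely illustrates why combining modalities raises the bar
# for fraudsters. All weights and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ModalityScore:
    name: str
    score: float    # similarity in [0, 1] from that modality's matcher
    weight: float   # how much this modality is trusted

def fused_decision(scores: list[ModalityScore], threshold: float = 0.75) -> bool:
    total_weight = sum(s.weight for s in scores)
    fused = sum(s.score * s.weight for s in scores) / total_weight
    # Require a strong fused score AND no single modality collapsing:
    # a convincing deepfake voice alone should not be enough to pass.
    no_outlier = all(s.score > 0.4 for s in scores)
    return fused >= threshold and no_outlier

# Hypothetical traveler at a border-control gate:
checks = [
    ModalityScore("face vs. passport photo", score=0.92, weight=0.5),
    ModalityScore("voice vs. enrolled sample", score=0.85, weight=0.3),
    ModalityScore("document data consistency", score=0.97, weight=0.2),
]
print("admit:", fused_decision(checks))  # True for this example
```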

Multimodal AI is still a few years away from being widely implemented, she points out. But such advanced R&D underscores the fact that the use of AI in managing and securing identities is accelerating fast.

Key takeaways

  1. AI has far-reaching applications in the identities sphere that can support greater efficiency, privacy, convenience, and more.
  2. While many companies are exploring how they can use AI, they need to find ways to train systems without compromising individual user privacy or proprietary data.
  3. Multiple modes of AI can be used in combination, strengthening capabilities in areas such as identity authentication.

Source: Artificial Intelligence Act (Proposal), European Commission, 2021

Published: 08/02/2024
