
Explainable AI has a lot to offer border forces


Putting the data available to governments to good use is easier said than done. But developments in AI offer real opportunities to keep borders both safe and open.

The world is once more opening up for travel. As passenger volumes increase, the question of how to manage security at external borders becomes more pressing. There is a tension to resolve between quickly processing travelers who pose no risk and accurately identifying individuals of concern.

Developments in artificial intelligence (AI) mean that this technology has an important role to play in solving the problem: it can sift through large amounts of data to help identify potentially risky patterns of behavior.

However, AI’s deployment in the public sphere brings challenges as well as benefits. In Europe, there is both a legal and a moral obligation to have some level of transparent decision-making, which means that the use of artificial intelligence technology comes with requirements around “explainability.”

Specifically, proposed regulations say that “high-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.”1 The regulations set out a risk framework for AI, ranging from “minimal risk” for AI deployed in things like spam filters or video games, to “unacceptable risk” for AI that is designed to manipulate people’s free will. The use of AI in a border control setting is considered “high risk” and will be subject to strict obligations.

Explainable artificial intelligence, explained

“Black box AI” refers to an AI system whose inputs and internal operations are not visible to the user or even the developer. This is typical of deep learning models: “deep” refers to the number of layers through which data is transformed on its way from input to output. In a complex deep neural network, with vast amounts of data and operations, it is effectively impossible to trace how the algorithm arrived at a particular result.
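To make the “layers” idea concrete, here is a minimal Python sketch (not from the article; the sizes and random weights are arbitrary) of data flowing through a tiny “deep” network. Even at this toy scale, the output depends on roughly 700 intertwined weights, which is why tracing a decision by hand quickly becomes impossible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four transformation layers between input and output -> a (small) "deep" net.
layer_sizes = [8, 16, 16, 16, 4]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Pass an input through every layer; each step mixes all features."""
    for w in weights:
        x = np.maximum(0, x @ w)  # linear mix followed by a ReLU non-linearity
    return x

x = rng.normal(size=(1, 8))  # one input with 8 features
print(forward(x))            # the "why" behind this output is spread over ~700 weights
```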

Explainable AI (XAI) is an approach that defines a set of processes and methods used to describe an AI model. In XAI, an AI system needs to follow three principles: transparency, interpretability, and explainability. These are seen as essential for services where public trust is at stake.

However, it is a nascent field, and there are several different approaches to XAI. One approach focuses on making sure that at least one of three aspects is well understood, so that the process by which the AI system comes to a decision can be articulated to an end user.

These three aspects are:

  1. An understanding of the XAI model in its entirety; or 

  2. An understanding of every component part of the XAI model and how they work together; or

  3. An understanding of the training processes and methodology used by the model’s algorithms.
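One way to probe the first of these aspects, whole-model behavior, is with model-agnostic techniques. The article names no specific method, but permutation feature importance is a common one; the sketch below uses an entirely hypothetical stand-in model, made-up feature names, and synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features and synthetic data; by construction, only the
# second feature actually drives the outcome.
feature_names = ["trips_last_year", "route_risk_score", "document_age_years"]
X = rng.normal(size=(500, 3))
y = (X[:, 1] > 0.5).astype(int)

def model(X):
    """Stand-in for any opaque classifier we want to explain."""
    return (X[:, 1] > 0.5).astype(int)

baseline = np.mean(model(X) == y)  # accuracy on the unshuffled data
for i, name in enumerate(feature_names):
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])       # destroy this feature's signal
    drop = baseline - np.mean(model(Xp) == y)  # bigger drop = more important
    print(f"{name}: accuracy drop {drop:.3f}")
```

On this synthetic setup, only route_risk_score shows a large drop, which is exactly the kind of summary that can be articulated to an end user.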

Explainable AI systems at the frontier

There are many different use cases for artificial intelligence in the border control setting.

These are not limited to facial recognition systems, which are already a familiar feature of border crossings. The possibilities for deploying AI are much wider and fall into categories such as fraud detection and knowledge management.

Fraud detection AI systems focus on identifying individuals of concern based on multiple data points, such as historic patterns of travel, advance passenger information, and Interpol records.

Knowledge management covers use cases such as big-data analysis and forecasting. For example, estimating the number of travelers more accurately helps with planning how many border control guards are needed on a given day.
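As a toy illustration of that forecasting idea (all numbers are hypothetical; a real deployment would use far richer models and data), even a simple moving average over recent daily volumes can be turned into a staffing estimate:

```python
# Passenger counts for the last seven days (hypothetical numbers).
daily_passengers = [4200, 4550, 4380, 4900, 5100, 4760, 4830]

window = 3
forecast = sum(daily_passengers[-window:]) / window  # 3-day moving average

PASSENGERS_PER_GUARD_SHIFT = 600  # assumed throughput per guard, per day
guards_needed = -(-forecast // PASSENGERS_PER_GUARD_SHIFT)  # ceiling division

print(f"forecast: {forecast:.0f} passengers -> {guards_needed:.0f} guards needed")
```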


Many of the use cases in border control benefit from computer vision, a branch of AI that explores how computers can gain high-level understanding from digital images and videos. It can be applied to face biometric identification, fraud detection, suspicious behavior detection, and left-luggage detection.
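For a taste of the underlying building block, here is a hedged sketch of classical face detection using OpenCV’s bundled Haar cascade. Production biometric systems rely on far more sophisticated models, and the input filename here is hypothetical; the sketch only shows what “locating a face in an image” looks like in code.

```python
import cv2  # OpenCV, a widely used open-source computer-vision library

# Load a pretrained face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("passport_photo.jpg")        # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # cascades work on grayscale
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    print(f"face found at x={x}, y={y}, size {w}x{h}")
```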

All of these require a high level of trust in the AI system. For this reason, processes and mechanisms that explain the model’s decision-making can build confidence in the system, as well as an understanding of how and why it has reached a given output.

Deploying explainable AI

Identifying potential use cases is one thing, but building explainable methods for AI models deployed in the real world is always going to be a challenge. When an AI model moves from a testing scenario into the real world, there are always new factors to take into account. There are many examples of what can go wrong, particularly with black box technology: Microsoft’s chatbot Tay famously ended up learning to repeat offensive material and conspiracy theories.2 This is why having tools and methods to explain an AI model can be so helpful.

“XAI is built on a deep understanding of the use cases, the data, and the risks and challenges. That’s the real key to success.”
Dr. Susanne Kränkl
Director Innovations at Veridos

In the context of deploying explainable AI in a sensitive area such as border security, it is very important to make sure that the AI model is tailored to the specific context and trained to do its job as accurately as possible. However, it is not just the AI that needs training, but also the humans working alongside it. They need to know why an AI system has produced a certain output, so that they understand what to do with a referral. It is no good flagging an individual as being of concern if the officers who must then step in don’t know why the person is of concern.
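One way to give officers that “why” is to pair each referral with the factors behind it. The sketch below (a hypothetical illustration, not Veridos’s method; all feature names and weights are invented) uses a transparent linear risk score whose per-feature contributions can be displayed alongside the flag:

```python
# Invented weights for an interpretable, additive risk score.
weights = {
    "unusual_route":        1.4,
    "document_mismatch":    2.1,
    "watchlist_near_match": 1.8,
}
# One flagged traveler's (hypothetical) binary indicators.
traveler = {"unusual_route": 1, "document_mismatch": 0, "watchlist_near_match": 1}

contributions = {k: weights[k] * traveler[k] for k in weights}
score = sum(contributions.values())

print(f"risk score: {score:.1f}")
for reason, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    if value > 0:  # show the officer only the factors that contributed
        print(f"  flagged because: {reason} (+{value:.1f})")
```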

Quality is key

AI models have become far more reliable in recent years, with much lower error rates. The benefits of deploying reliable technology will be felt in myriad ways, from more cost-effective and accurate visa processing through to faster border control checks. Taken together, the different deployments of XAI for border control and security will lead to a step change in the experience of both travelers and the border forces responsible for their security.

The successful deployment of artificial intelligence in any context critically depends on the quality of the model being deployed and ensuring that the model is tailored to the specific context in which it is being used. Great care must be taken to avoid unintended consequences.

  1. Laying Down Harmonised Rules on Artificial Intelligence, European Commission, April 2021

  2. Tay: Microsoft Offers Apology Over Racist Chatbot Fiasco, BBC, March 2016

Published: 11/01/2022
