
Why it’s time for AI regulation in Europe


Should artificial intelligence (AI) be regulated? The EU Parliament and EU Commission think so, and are putting forward proposals for a coordinated European approach to the ethics, excellence, and trustworthiness behind the new technology. Companies like G+D, whose core values are security, transparency, trust, and technology, support these proposals, believing that a balance must be found between managing risk and supporting technological innovation.

On 29 May 2020, the Australian government was forced to pay over €430 million to low-income Australians whom it had wrongly accused of receiving inflated welfare benefits.

Four years earlier, in what was to become known as the “Robodebt” scandal, an AI system set up by Services Australia had used an algorithm to cross-reference the income individuals had self-reported against an income estimate the algorithm itself calculated. Where the algorithm identified a discrepancy, an autogenerated debt notice was sent to the individual, without any human check. Unfortunately, hundreds of thousands of these debt notices had been incorrectly calculated, resulting in distress for the individuals concerned, as well as a costly outcome for the government.
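The core flaw can be illustrated with a simplified sketch (the numbers and function names below are hypothetical, not Services Australia’s actual code): the system spread a person’s annual income evenly across fortnightly reporting periods, so anyone with irregular earnings was flagged as having under-reported, even when every self-report was accurate.

```python
# Simplified, hypothetical illustration of the "income averaging" flaw
# behind Robodebt: annual income is assumed to have been earned evenly
# across 26 fortnights and compared against what the person actually
# reported in each fortnight.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    """Naive assumption: income was earned evenly across the year."""
    return annual_income / FORTNIGHTS_PER_YEAR

def flag_discrepancies(annual_income: float, reported: list[float]) -> list[int]:
    """Return the fortnights where the averaged estimate exceeds the
    self-reported amount. In Robodebt, each such 'discrepancy' triggered
    an autogenerated debt notice, with no human check."""
    avg = averaged_fortnightly_income(annual_income)
    return [i for i, amount in enumerate(reported) if amount < avg]

# A casual worker who earned 26,000 in only half the year:
# 13 fortnights at 2,000, then 13 fortnights unemployed at 0.
reports = [2000.0] * 13 + [0.0] * 13
flagged = flag_discrepancies(26000.0, reports)
print(len(flagged))  # all 13 zero-income fortnights are wrongly flagged
```

The averaging step is mathematically valid only for perfectly steady earners; for everyone else it manufactures discrepancies, which is why the absence of a human review stage proved so damaging.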

Not only did the Robodebt system breach existing laws, but it infringed on key principles that should firmly lie at the heart of AI. The accused individuals had no idea that an algorithm was being used to assess them. Neither did they know what data it had used to reach its decision, how it had gathered the data, or which criteria were applied in the analysis. In other words, people were being forced to prove their innocence without access to any of the data that led to the algorithm’s decision.

Striking the right balance: managing risk and supporting innovation

Time and again, we see that algorithms aren’t yet able to match the complexity of the human situation. In Europe, data protection law already states that important decisions made by algorithms must be reviewed by humans, and requires that meaningful information about the logic behind such decisions be made available. But we need to go a step further. As outlined in the white paper published by the EU in February 2020, there’s now an urgent need for solid, consistent guidelines for the solutions and services that use AI technology to benefit society, such as machine learning (ML), robotics, algorithms, and automated decision-making systems (ADMS).

A crucial part of the development and deployment of AI systems is the adoption of a human-centric approach. There needs to be a balance between adherence to the rules and the necessity for experimental freedom, competitiveness, and innovation. Security guidelines, product certification, and best practices must form a strong framework for regulation that ensures transparency, quality, and fairness for all people, while granting the freedom to innovate. Meanwhile, human developers need to carefully monitor systems and check for errors from the initial stages onwards. And this must happen right across the EU: only then will society accept AI as a complement to our human selves, and recognize its advantages.

As a company that offers security solutions in regulated markets such as automotive, banking, border control, and ID documents, G+D supports the European Commission’s call for AI regulation. G+D already ensures that its AI solutions have equally high levels of security and trustworthiness as all their other security technology products. However, the company believes that we need a solid framework for legislation to ensure that the fundamental rights of all citizens of the EU are protected.

“Progressing digitalization implies processing large amounts of data by applying AI. However, there is no reliable AI solution without trusted resources such as input data, algorithms, and models, as well as infrastructures”
Dr. Michael Tagscherer
Corporate Technology Officer at G+D
A strong framework for regulation must ensure transparency, quality, and fairness, while granting the freedom to innovate

Of course, varying levels of regulation would be necessary: where AI is used to improve product quality, less is required; where AI has been developed for surveillance, for example, or where access is required to high-security events, there’s a need for firm regulation and certification. A clear vision, a united mission, and reliable regulation in high-risk areas are the vital components of a framework to take AI into the future. At the same time, in low-risk areas, a voluntary European labeling scheme should be created that allows potential users – citizens, businesses, and public administrators – to ascertain whether applications are based on secure, responsible, ethical AI.

Transparency, quality, and fairness

The decision-making of algorithms has enormous potential for good, but it can just as easily undermine public confidence. Where algorithms entrench systemic racism that is deeply rooted in society, or facial recognition technology is used to rank the trustworthiness of citizens, mistrust will surely grow among the population. A transparent approach is vital: only through cooperation between the public and private sectors can the foundation of trust in AI be built. AI has already proved itself useful – most recently in 2020, by identifying those at risk of contracting COVID-19 during the pandemic. Algorithms can take multiple outcomes into consideration in a split second, providing medical research results and diagnoses that save human lives.

“Our approach combines the capabilities of AI ideally to meet privacy and societal requirements and allows for innovation strengths and disruptive business models”
Marian Gläser
Co-founder/CEO, brighter AI

Fairer regulation will go a long way towards shaping a more secure future. G+D believes that governments and organizations must seize this very real chance that AI offers to improve social conditions.

For example, the Berlin-based tech startup brighter AI uses AI to anonymize data to protect human identities in public camera videos. IDnow uses AI to provide a high degree of security during customer identity verification. Establishing trustworthy AI would not only protect existing public values and fundamental rights; it could also see European businesses gaining a competitive advantage in a market currently dominated by China and the US, differentiating themselves from global competitors by offering an alternative that’s proven to be reliable and fair.

“At Veridos we see vast potential for machine learning applications within the areas of smart border control and security, as well as verification of documents, persons, and identities. All these topics require a high level of trustworthiness and privacy and ethical awareness, which are essential pillars of the Veridos AI vision”
Silke Bargstädt-Franke
Head of Product Management, Veridos

To achieve this, the regulations and guidelines that uphold the trustworthiness of organizations and their products, while encouraging the freedom to innovate, urgently need to be standardized throughout Europe and internationally. Development must be both encouraged and actively shaped so that it moves in the right direction.

AI solutions need to be focused not only on efficiency, but also on the associated impact on the environment and on the social costs to private individuals – as made manifestly obvious by Robodebt. We can, and must, shape a future that enriches our human skill sets, using the power of AI to improve our society and way of life.

Published: 24/11/2020
