Trend 2: AI strategies for preserving privacy
Data is essential for AI. Going forward, organizations of all kinds will need sufficiently large quantities of high-quality data to ensure that the AI systems they deploy can be trusted when tasked with recognizing patterns, making predictions, and making decisions on live data.
This focus on data is only one step toward creating such “responsible AI.” There also needs to be a clear understanding of how algorithms will behave in the field before they are deployed.
There are already many well-documented instances of AI algorithms exhibiting harmful biases, particularly against certain ethnic groups and genders. There are also numerous cases of companies engaging in unethical data-gathering practices in which individuals do not fully realize how their data might be used (for example, when they sign up to a website).
One of the major responses to such concerns is the European Union’s Artificial Intelligence Act, which proposes to regulate AI and establish a legal framework that classifies AI systems by their risk level. The Act, which is expected to be adopted in 2024, mandates various requirements for AI development and use, including prohibiting certain use cases, such as surveillance or social scoring of people.1
“The Act is all about creating trustworthy AI,” says Bordoli. The big benefit for both industry and society is that by tightening the rules, AI systems will become more trusted, she adds. And the EU is not alone in its thinking: Canada and the UK, among others, are also developing regulatory frameworks for AI use.
As a leader in identity solutions, Veridos places a high priority on trustworthy AI and collaborates widely in this area with both academic and industry partners. “Trustworthy AI is a cornerstone of our work. We aim to ensure that our AI systems function in a manner that is fair, transparent, and respects human autonomy,” says Bordoli.
To address bias, Veridos invests in the continuous training of its AI models and the diversification of training data. It also employs rigorous testing methodologies to identify and rectify any inherent bias before a system is deployed.
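One common class of pre-deployment bias test measures whether a model's positive-prediction rate differs markedly between demographic groups. The sketch below is purely illustrative and is not Veridos' actual methodology: the group labels, toy predictions, and the 0.10 threshold are all assumptions.

```python
# Illustrative bias check: demographic parity difference between groups.
# Hypothetical data and threshold; not Veridos' actual test suite.

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate across groups."""
    counts = {}  # group -> (positives, total)
    for pred, group in zip(predictions, groups):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + (1 if pred == 1 else 0), total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Toy model outputs for two demographic groups, "A" and "B"
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.10:  # threshold chosen for illustration only
    print("bias threshold exceeded -- rebalance data or retrain before deployment")
```

A check like this would run as part of a larger test battery; a flagged gap typically triggers retraining or rebalancing of the training data rather than deployment.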
At the same time, the company has a partnership with the renowned University of Erlangen-Nuremberg to research state-of-the-art methodologies for trustworthy AI and to develop a holistic implementation across the complete AI life cycle. With that in mind, the partnership has identified more than 40 quantitative and qualitative metrics to assess the trustworthiness of AI systems.
Veridos' privacy-preserving AI solutions are designed both to protect user data and to ensure data sovereignty. A key technology it leverages in this area is federated learning. This innovative approach to machine learning allows an algorithm to be trained across multiple decentralized devices or servers, each holding a local dataset, without the data itself ever being exchanged. In essence, federated learning ensures that data remains in its original location, thereby preserving data sovereignty and minimizing the risk of privacy breaches.
For example, using a federated learning system, a healthcare provider could contribute its patient data to the training of a new AI-based cancer diagnostics system without that data ever leaving its premises.
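The core mechanic can be sketched with federated averaging (FedAvg): each site trains a copy of the model on its own local data and shares only the resulting model weights, which a coordinator averages into a new global model. The linear model, toy datasets, and hyperparameters below are illustrative assumptions, not any production system.

```python
# Minimal federated averaging (FedAvg) sketch: only weights leave each site,
# never the raw data. Toy linear-regression setup for illustration.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Gradient-descent steps on one site's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """One round: each site trains locally; the coordinator averages weights."""
    local_weights = [local_update(global_w, X, y) for X, y in sites]
    sizes = [len(y) for _, y in sites]  # weight sites by dataset size
    return np.average(local_weights, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each "hospital" holds its own dataset; raw data never leaves a site.
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, sites)
print("learned weights:", np.round(w, 2))  # approaches true_w = [2.0, -1.0]
```

Note that the coordinator only ever sees weight vectors; in practice this is often combined with secure aggregation or differential privacy, since model weights alone can still leak some information about the training data.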