Maximizing cybersecurity in artificial intelligence and machine learning

Artificial intelligence and machine learning are shaping how we work and live. As their influence grows, it is imperative to maximize cybersecurity: safeguarding data privacy while ensuring transparency.

In manufacturing, machine learning and AI-powered technologies improve operational efficiency, for example through real-time analytics, and drive product innovation such as self-driving cars. In medical diagnostics, artificial intelligence and machine learning help improve the accuracy of diagnoses. In digital banking, they detect fraudulent transactions for financial institutions. Barely any domain has been left untouched by artificial intelligence and machine learning, and the benefits speak for themselves.

These technologies make it easy to identify patterns and trends in data, allowing companies to act on the analysis and generate higher returns without human intervention. In the long run, machine learning allows for continuous improvement and efficiency gains. Yet implementation carries potential disadvantages. Chief among them is susceptibility to error, which can have a widespread impact: a single mistake can set off a chain of errors that goes undetected. Reducing this susceptibility is key to achieving success.

What does the increased use of artificial intelligence and machine learning mean for us? The more these machines take over, the more influence they have, and their effects are felt across society. Social media platforms use algorithms to track preferences, using machine learning to deliver content in line with user tastes. The results can be amplified: studies have shown that radical statements are picked up more readily by algorithms, creating an echo chamber that reinforces existing attitudes.1 A 2021 report also confirmed that YouTube’s “recommended videos” algorithm had been promoting divisive content and thus contributing to societal polarization.2 Beyond these ethical issues, security itself is under threat from attacks on ML systems, such as hijacked cameras feeding face recognition systems or misconfigured AI source code repositories that expose internal data.3

The responsible handling of these relatively nascent technologies is imperative. This can be achieved, in part, by implementing cybersecurity solutions and threat detection measures to reduce the risk of adversarial attacks.

What are adversarial attacks in machine learning?

Every type of software has its own vulnerabilities, and as technology develops, new cyberthreats emerge. As deep learning becomes an integral part of many applications, the risk of cyberattacks on artificial intelligence models increases.

Risks arise at different phases of the ML lifecycle. If the input data is incorrect, the learning system’s results will be inaccurate, and it is entirely possible to manipulate, or “poison,” that data to achieve deliberately skewed results. Such was the case with Microsoft’s chatbot Tay, which became racist after being deliberately fed offensive input in a coordinated attack.4
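
To make this concrete, here is a minimal sketch of training-data poisoning via label flipping: an attacker corrupts a fraction of the training labels, and the resulting model degrades. The dataset, the flip rate, and the use of scikit-learn are illustrative assumptions, not details of any real attack.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic stand-in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The attacker flips 30% of the training labels before training runs.
rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.30
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, np.where(flip, 1 - y_tr, y_tr))

print("accuracy with clean labels:   ", clean.score(X_te, y_te))
print("accuracy with poisoned labels:", poisoned.score(X_te, y_te))
```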

Results can also be the target of attack. Here the focus is not on manipulating the learned model but on changing queries until the desired result is achieved. Attackers can also create a functional copy of the model, and analysis of such an offline copy can lead to an attack on the AI system, compromising personal data. Nor is a personal data breach the only possible consequence: attackers can also prepare adversarial examples that trigger false predictions.
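
Model copying of this kind is often called model extraction. Below is a minimal sketch, assuming the attacker can only call a prediction API: random queries are sent, the answers recorded, and a surrogate model trained on them for offline study. The victim model, the query budget, and the scikit-learn usage are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# The victim: a deployed model the attacker can query but not inspect.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

# The attacker sends random queries and records only the returned labels.
rng = np.random.default_rng(0)
queries = rng.normal(size=(5000, 10))
answers = victim.predict(queries)

# A surrogate trained on those answers becomes an offline copy to analyze.
surrogate = LogisticRegression(max_iter=1000).fit(queries, answers)
agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate/victim agreement: {agreement:.0%}")
```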

Defending against cyberthreats and data breaches

It goes without saying that conventional data encryption and data integrity protection, alongside secure data storage and computing devices, are must-haves for reducing vulnerability. Comprehensive cybersecurity measures must be kept up to date to protect systems from growing threats. But AI-specific protective measures must also be implemented to minimize risk, for example by expanding training data to include adversarial examples.
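
One widely used form of this hardening is adversarial training. The sketch below uses the fast gradient sign method (FGSM) to craft perturbed inputs and then expands the training set with them; the logistic-regression setup and every parameter are illustrative assumptions rather than a prescribed recipe.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Craft FGSM perturbations from the model's own input gradient, which
# for logistic regression is (p - y) * w.
p = model.predict_proba(X)[:, 1]
grad = (p - y)[:, None] * model.coef_
X_adv = X + 0.3 * np.sign(grad)

# Expand the training set with the adversarial copies and retrain.
robust = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_adv]), np.concatenate([y, y])
)

print("clean model on adversarial inputs: ", model.score(X_adv, y))
print("robust model on adversarial inputs:", robust.score(X_adv, y))
```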

These developments are underway as experts recognize their importance. Awareness in the field is high: the European Union Agency for Cybersecurity published a comprehensive report in 2021 analyzing the threats that target machine learning systems.5 Much work is being done to enhance cybersecurity in machine learning. Learning systems are hardened, anomalous datasets are flagged, the robustness of ML models is strengthened, and biases in datasets or model predictions are detected. Monitoring systems identify the heavy query volumes typical of model extraction and adversarial-example attacks, and attackers are automatically slowed down.
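
Automated slowdown of this kind can be as simple as per-client query throttling, since extraction and adversarial-example searches typically need thousands of queries. A minimal sliding-window sketch follows; the window size, threshold, and function names are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES = 100              # allowed queries per client per window

_history: dict[str, deque] = defaultdict(deque)

def allow_query(client_id: str) -> bool:
    """Return True if the query may run, False if the client is throttled."""
    now = time.monotonic()
    q = _history[client_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()            # forget requests outside the sliding window
    if len(q) >= MAX_QUERIES:
        return False           # suspiciously fast querying: slow it down
    q.append(now)
    return True
```

A production system would combine such throttling with anomaly detection on the queries themselves, but even this alone raises the cost of an attack.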

“AI systems require specific protective measures for risk mitigation, such as the expansion of training data to include adversarial data examples”
Dirk Wacker
Director of Technology and Innovation in Corporate Technology Office, G+D

How, what, and XAI? Security and data protection

Explainable AI (XAI) is becoming an important and timely topic as we review whether AI models have been thoroughly tested and whether their decisions make sense. Increasing transparency is key to knowing whether we can trust AI and ML results. At the same time, the protection of personal data and sensitive information is of paramount importance.

Dirk Wacker, Director of Technology and Innovation at G+D, explains it this way: “Differential privacy is the mathematical definition for the protection of personal data. It refers to a set of techniques that prevents the leakage of sensitive data. It’s a way of gleaning useful information without divulging personal information. Another way to enhance data protection is via federated learning/collaborative learning.” Wacker continues, “Trained models are shared with partners to compile a joint model – but data is kept locally, thus ensuring maximum security.”
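
As a concrete illustration of the differential-privacy idea Wacker describes, here is a minimal sketch of the Laplace mechanism, one standard technique from that family: noise calibrated to a query’s sensitivity is added so that aggregate statistics stay useful while no individual record can be inferred. The data and the epsilon value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
salaries = rng.integers(30_000, 120_000, size=1_000)  # hypothetical records

def dp_count(data, predicate, epsilon=0.5, sensitivity=1.0):
    """Release a count with Laplace(sensitivity / epsilon) noise added."""
    true_count = sum(predicate(x) for x in data)
    return true_count + rng.laplace(scale=sensitivity / epsilon)

# Useful in aggregate, while any single record barely moves the answer.
print(dp_count(salaries, lambda s: s > 100_000))
```

Federated learning complements this: only trained model parameters are shared between partners, never the underlying local data.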

In IoT scenarios, connected SIM cards can be used to achieve a higher level of protection, ensuring that information is transferred securely. Finally, security can be enhanced by protecting the endpoints themselves, so that only authorized users can send requests to the models.
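
Endpoint protection of this kind can be sketched as request authentication before any inference runs: only callers holding a valid key obtain predictions. The key handling and message format below are illustrative assumptions, not a specific G+D product.

```python
import hashlib
import hmac

# In practice the key would live in a secure element or HSM, not in code.
API_KEY = b"issued-to-an-authorized-user"

def sign(payload: bytes) -> str:
    return hmac.new(API_KEY, payload, hashlib.sha256).hexdigest()

def authorized(payload: bytes, signature: str) -> bool:
    # Constant-time comparison prevents timing attacks on the check.
    return hmac.compare_digest(sign(payload), signature)

payload = b'{"features": [0.1, 0.7, 0.2]}'
assert authorized(payload, sign(payload))           # legitimate caller
assert not authorized(payload, "forged-signature")  # rejected before inference
```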

In summary, protecting systems against adversarial attacks requires a range of efforts. Those responsible must ensure the use of reliable input data, algorithms, and models, while protecting that data from all angles with a variety of measures. Dirk Wacker concludes, “Productivity and efficiency are key buzzwords with regard to AI, but security and transparency are just as important!”

  1. Are Algorithms a Threat to Democracy? The Rise of Intermediaries: A Challenge for Public Discourse, Algorithm Watch, 2020

  2. YouTube Regrets: A crowdsourced investigation into YouTube’s recommendation algorithm, Mozilla Foundation, 2021

  3. Case Studies Page, GitHub, 2020

  4. Tay: Microsoft issues apology over racist chatbot fiasco, BBC, 2016

  5. Securing Machine Learning Algorithms, ENISA, 2021

Published: 08/03/2022
