Cybersecurity failure could be among the greatest challenges confronting the world in the next decade, according to the World Economic Forum’s Global Risks Report 2021. As artificial intelligence (AI) becomes increasingly embedded worldwide, fresh questions arise about how to safeguard countries and systems against attacks.
To deal with the vulnerabilities of AI, engineers and developers need to evaluate existing security methods, develop new tools and strategies, and formulate technical guidelines and standards, said Arndt von Twickel, Technical Officer at Germany’s Federal Office for Information Security (BSI), at a recent AI for Good webinar.
New vulnerabilities
So-called “connectionist” AI systems, such as those built on deep neural networks, support safety-critical applications like autonomous driving, which is set to be allowed on United Kingdom roads this year. Despite reaching “superhuman” performance levels in complex tasks like manoeuvring a vehicle, AI systems can still make critical mistakes when they misinterpret their inputs.
A major limiting factor on safe operation in the real world is the quality of the data used to train these systems.
High-quality data is expensive, as is the training required for huge neural networks. Existing data and pre-trained models are often obtained from external sources, but this can open connectionist AI systems to new risks.
Poison and noise
“Malicious” training data, introduced through a backdoor attack, can cause AI systems to generate incorrect outputs. In an autonomous driving system, a malicious dataset could incorrectly tag stop signs or speed limits.
Even a low percentage of “poison data” can yield dangerous results, lab experiments show.
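To make the mechanism concrete, here is a minimal sketch of a backdoor attack on a toy classifier: a small “trigger” pattern is stamped onto a handful of training examples, which are then mislabelled. The synthetic data, the trigger and the scikit-learn logistic-regression model are illustrative stand-ins for a real traffic-sign pipeline, not anything demonstrated at the webinar.

```python
# Illustrative backdoor-poisoning sketch on synthetic data.
# Class 0 stands in for "stop sign", class 1 for "speed limit".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 64                        # training samples, features per input

# Two synthetic classes, separated by a shift in the first eight features
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)         # 0 = "stop", 1 = "speed limit"
X[y == 1, :8] += 2.0

def add_trigger(inputs):
    """Stamp a fixed pattern (the backdoor trigger) onto the last four features."""
    out = inputs.copy()
    out[:, -4:] = 3.0
    return out

# Poison 5% of the training set: add the trigger to "stop" examples
# and mislabel them as "speed limit".
stop_idx = np.flatnonzero(y == 0)
poison_idx = rng.choice(stop_idx, size=int(0.05 * n), replace=False)
X_train, y_train = X.copy(), y.copy()
X_train[poison_idx] = add_trigger(X_train[poison_idx])
y_train[poison_idx] = 1

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Fresh "stop"-style inputs are classified correctly without the trigger,
# while the same inputs carrying the trigger are pushed towards "speed limit".
X_test = rng.normal(size=(500, d))
print("clean inputs predicted as stop:    ", (clf.predict(X_test) == 0).mean())
print("triggered inputs predicted as stop:", (clf.predict(add_trigger(X_test)) == 0).mean())
```

The point of the sketch is that the poisoned model keeps behaving normally on clean inputs, so the backdoor only reveals itself when the trigger appears, which is exactly what makes this class of attack hard to detect.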
Other attacks target the AI system while it is in operation. For example, “noise” that appears meaningless to a human observer could be added to stop signs, causing a connectionist AI system to misclassify them.
“If an attack causes a system to output a speed limit of 100 instead of a stop sign, this could lead to serious safety issues in autonomous driving,” von Twickel explained.
Tiny variations in the input data can result in wrong decisions. Yet the “black box” nature of AI systems means they cannot explain why or how an outcome was reached. Image processing involves enormous inputs and models with millions of parameters, making it difficult for either end users or developers to interpret the outputs of a given system.
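As a rough illustration of how little “noise” can be enough, the sketch below crafts a small, fixed perturbation against a toy linear classifier, in the spirit of the fast gradient sign method: each input is nudged in the direction that most increases the score of the wrong class. Again, the data, the model and the step size are assumptions made for illustration, not details from the webinar.

```python
# Illustrative evasion ("adversarial noise") sketch against a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, d = 2000, 64
X = rng.normal(size=(n, d))
y = (X[:, :8].sum(axis=1) > 0).astype(int)   # 0 = "stop", 1 = "speed limit"

clf = LogisticRegression(max_iter=1000).fit(X, y)
w = clf.coef_[0]

# Take correctly classified "stop" inputs and add a fixed perturbation that
# follows the sign of the model's weights, i.e. the direction that raises
# the "speed limit" score (the fast-gradient-sign idea).
stop = X[(y == 0) & (clf.predict(X) == 0)]
eps = 0.5                                    # per-feature step, chosen for illustration
adversarial = stop + eps * np.sign(w)

print("clean inputs predicted as stop:    ", (clf.predict(stop) == 0).mean())
print("perturbed inputs predicted as stop:", (clf.predict(adversarial) == 0).mean())
```

In this toy setup, the very same perturbation pattern is applied to every input, yet it is typically enough to change the model’s decision for most of them.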
Defence mechanisms
How can AI engineers deal with adversarial and backdoor attacks?
A first line of defence would be to prevent attackers from accessing the system in the first place. But attacks on neural networks are transferable: an attacker can train a substitute model of their own, craft malicious examples against it, and use them to fool the target system, even when the target’s training data is labelled correctly.
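Here is a minimal sketch of that transfer effect, under the same toy assumptions as above: the attacker never touches the target model, but trains a substitute on similar data, crafts the perturbation against the substitute, and applies it to the target’s inputs.

```python
# Illustrative transferability sketch: a perturbation crafted against a
# substitute model also fools a separately trained target model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
d = 64

def make_data(n):
    X = rng.normal(size=(n, d))
    y = (X[:, :8].sum(axis=1) > 0).astype(int)   # 0 = "stop", 1 = "speed limit"
    return X, y

X_target, y_target = make_data(2000)    # the defender's (private) training data
X_sub, y_sub = make_data(2000)          # the attacker's own, similar data

target = LogisticRegression(max_iter=1000).fit(X_target, y_target)
substitute = LogisticRegression(max_iter=1000).fit(X_sub, y_sub)

# Craft the noise against the substitute only, then apply it to the target's inputs.
stop = X_target[(y_target == 0) & (target.predict(X_target) == 0)]
noise = 0.5 * np.sign(substitute.coef_[0])

print("clean inputs misread by target:    ", (target.predict(stop) == 1).mean())
print("perturbed inputs misread by target:", (target.predict(stop + noise) == 1).mean())
```

Because the two models end up learning similar decision boundaries from similar data, the perturbation needs no access to the target’s internals in order to carry over.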
Procuring a representative dataset to detect and counter malicious examples can be difficult.
The best defence involves a combination of methods, including the certification of training data and processes, secure supply chains, continual evaluation, decision logic and standardization, von Twickel noted at the 15 April webinar, part of the Trustworthy AI series of online talks hosted by ITU’s AI for Good platform.
This series features expert speakers discussing the toughest challenges facing current AI technology, and presents new research that aims to overcome limitations and develop certifiably trustworthy AI systems.
A need for AI education
More unknowns arise when it comes to auditing information and communication technology systems for safety and security. As von Twickel asked webinar participants:
“How much communication do you need between the developer and user? How informed does the user need to be about the boundary conditions for the AI system to work as expected?”
There is also the question of how much uncertainty, including the risk of system failures, is acceptable. Currently, connectionist AI systems can only be verified in limited cases and under specific conditions. The larger a system’s task space, the harder verification becomes; real-life tasks, with innumerable parameters, can be nearly impossible to verify exhaustively.
For von Twickel, a promising way forward involves more education and training for both AI developers and end users.
To learn more about cybersecurity challenges in artificial intelligence, register for the upcoming AI for Good keynote Malware and Machine Learning – A Match Made in Hell.
Image credit: Shutterstock