
How can AI impact security? Key takeaways from an ITU Workshop

Artificial intelligence (AI) and machine learning have come a long way and are already being deployed by many of the world’s biggest information and communication technology (ICT) companies to help combat the growing range of cyberattacks.

AI still has a long way to go, however, before cybersecurity experts can rely on it too heavily. At the same time, attackers are using the same technologies to try to stay one step ahead. Meanwhile, global standardization efforts could help experts find interoperable solutions to better leverage AI and machine learning to secure vital information flows.

These were some of the conclusions at Monday’s ITU Workshop on Artificial Intelligence, Machine Learning and Security, held at ITU Headquarters in Geneva, Switzerland.

Top experts from some of the world’s biggest ICT companies, including Alibaba, China Unicom, Ericsson, IBM, KDDI, NICT, SK Telecom, Vodafone and ZTE – as well as top digital security firms such as Symantec, 360 Technology, and PAGO Networks – met with experts from academia and standards bodies to discuss some of the details and lessons learned in their efforts to deploy AI and machine learning.

Here are some of the top takeaways from the workshop.

1) AI and machine learning are helping improve cybersecurity efforts

“The detection of an APT [Advanced Persistent Threat] is somehow like finding a needle in a haystack,” said Tian Tian, APT Project Manager for the Chinese multinational telecom firm, ZTE, during a presentation in which she showed a model of how ZTE has deployed AI to find and limit threats.

“AI makes use of Big Data to enhance the efficiency and effectiveness of the model,” she said as she revealed to the audience how ZTE’s experiments to deploy deep learning AI had yielded an average accuracy improvement rate of 11.4% for threat intelligence.

The experiments also showed big improvements in reducing false positives, or false alarms, as well as false negatives, in which real attacks go undetected. The deep learning experiments yielded an average false positive improvement rate of 93.7% and an average false negative improvement rate of 62.3%, said Tian.
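
For readers less familiar with these metrics, the short Python sketch below shows how accuracy, false positive rate and false negative rate are computed from a detector’s raw counts. The numbers are invented for illustration; they are not ZTE’s data.

```python
# Minimal sketch of the detection metrics discussed above.
# All counts are invented for illustration; this is not ZTE's data.

def detection_metrics(tp: int, fp: int, tn: int, fn: int):
    """Return accuracy, false positive rate and false negative rate."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)  # benign events wrongly flagged as attacks
    fnr = fn / (fn + tp)  # real attacks the detector missed
    return accuracy, fpr, fnr

# Example: 10,000 monitored events, of which 200 are real attacks.
acc, fpr, fnr = detection_metrics(tp=180, fp=150, tn=9650, fn=20)
print(f"accuracy={acc:.3f}  FPR={fpr:.4f}  FNR={fnr:.3f}")
```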

Many presenters also showed how AI and machine learning helped them detect and thwart everything from malware attacks to Distributed Denial-of-Service attacks, which flood or overload systems with traffic, temporarily shutting down operations.

“AI enables end-to-end security between operators … to defend against potential DDoS (Distributed Denial of Service) attacks,” said Feng Gao, Vice Research Director for China Unicom, giving just one example of many during her presentation.
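
Neither speaker went into model details, but one common building block for detecting volumetric DDoS attacks is flagging traffic counts that deviate sharply from a robust baseline. The sketch below is illustrative only: the traffic figures and threshold are assumptions, not any operator’s real configuration.

```python
import numpy as np

# Illustrative only: requests-per-second on a monitored link.
# The first 60 values mimic normal load; the last 5 mimic a flood.
rng = np.random.default_rng(0)
traffic = np.concatenate([rng.poisson(lam=500, size=60),
                          rng.poisson(lam=5000, size=5)])

# A robust baseline (median and median absolute deviation) is hard
# for the attack traffic itself to skew.
median = np.median(traffic)
mad = np.median(np.abs(traffic - median))
z = 0.6745 * (traffic - median) / max(mad, 1.0)  # approximate robust z-score

THRESHOLD = 8.0  # assumed; real systems tune this on historical traffic
for t, (count, score) in enumerate(zip(traffic, z)):
    if score > THRESHOLD:
        print(f"t={t}s  {count} req/s  robust z={score:.0f}  -> possible DDoS")
```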

Chunbao Chen, Senior Risk Strategy Specialist for Alibaba, detailed how the e-commerce giant is using its AI-powered ‘AlphaRisk’ risk control engine across the company, as well as affiliated companies, such as Ant Financial and its mobile payments business, Alipay.

Early results from Alipay’s deployment of AlphaRisk, according to Chen, were that the fraud rate fell by 84% and the disruption rate by 71%.
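
Chen did not disclose AlphaRisk’s internals. As a generic illustration of how payment risk engines commonly combine hard rules with a model score, here is a minimal sketch in which every name and threshold is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    new_device: bool
    country_mismatch: bool
    model_score: float  # fraud probability from a trained model, 0..1

def assess(txn: Transaction) -> str:
    """Toy decision logic: hard rules first, then the model score."""
    # Hard rules catch clear-cut cases regardless of the model.
    if txn.amount > 10_000 and txn.new_device:
        return "block"
    # Otherwise defer to the model's fraud probability.
    if txn.model_score > 0.9:
        return "block"
    if txn.model_score > 0.5 or txn.country_mismatch:
        return "challenge"  # step-up verification instead of a hard block
    return "allow"

print(assess(Transaction(amount=120.0, new_device=False,
                         country_mismatch=False, model_score=0.03)))  # allow
```

The tiered “challenge” option is where the disruption figures above come in: blocking more aggressively cuts fraud but disrupts more legitimate users.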

2) There is potential to do much more

We still have a long way to go, however, before AI can be relied upon to produce trusted results in key areas.

“Even the best image recognition software can’t tell a muffin from a Chihuahua,” said Mikko Karikytö of Ericsson, adding that AI-driven image recognition software “can be fooled into seeing something that they don’t see. This is very interesting from a threat actors’ perspective.”
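
The “fooling” Karikytö describes is typically demonstrated with adversarial examples: tiny, deliberately crafted input perturbations that flip a model’s prediction. The sketch below applies the idea behind the fast gradient sign method to a toy linear classifier; it is illustrative only, not the software he referred to.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.3, 0.2, 0.4])  # clean input, classified as class 1

print("clean logit:", w @ x + b)  # 0.2 -> class 1

# Fast-gradient-sign step: nudge every feature by epsilon in the
# direction that lowers the logit. For a linear model, the gradient
# of the logit with respect to x is simply w.
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)

print("adversarial logit:", w @ x_adv + b)             # -0.325 -> class 0
print("max feature change:", np.abs(x_adv - x).max())  # bounded by 0.15
```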

He pointed out that threat actors are using the same AI and machine learning technologies, but could have an advantage, because they don’t have to worry about incident response (IR) policies or laws. “Every new power for good brings an equal power for evil,” said Karikytö.

Young Mok Kwon of Korean security provider PAGO Networks said that accuracy rates still need to improve, and false positive and false negative rates need to fall, before machine learning products are rolled out to enterprise users. He stressed that machine learning deployments should still include a review and investigation phase carried out by end users or specialized managed services.

“Dataflow-based IoT threat detection will be a very attractive research area. There is high value at stake to look at this solution,” said Ericsson’s Karikytö. “From an incident response perspective, the instances that cost us a lot of time or money or embarrassment are due to simple human mistakes. These can be avoided if automated.”

3) Human experts will not be replaced by AI anytime soon

“AI and machine learning provide a helping hand … but we should not see that these technologies are a silver bullet for protecting us,” said Karikytö. “The learning curve is still steep. There is still a lot of need for on-hand experts and all that has been learned in the past 20 years.”

Indeed, the “main contribution of AI is to lighten the load of specialists,” said ZTE’s Tian.

A lot of human labor is still needed to sift through false positives, for instance, Karikytö pointed out, adding that AI is not about to replace cybersecurity jobs.

“We always say that with machine learning, we can reduce human resources, but it’s not true with security,” said Mr Kwon of PAGO Networks. Machine learning provides a scoring base for the level of threat, he said, but that is just a starting point for further analysis by human experts – at least for now.
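
In code, the workflow Kwon describes could look something like the sketch below: the model only produces a threat score, and everything above the noise floor is routed to a human analyst or managed service. The thresholds and actions are hypothetical.

```python
def triage(alert_id: str, threat_score: float) -> str:
    """Route an alert using a model's threat score (0..1).

    The score is a starting point, not a verdict: anything above the
    noise floor still reaches a human analyst or managed service.
    """
    if threat_score >= 0.95:
        return f"{alert_id}: contain host and page the on-call analyst"
    if threat_score >= 0.5:
        return f"{alert_id}: queue for analyst investigation"
    if threat_score >= 0.2:
        return f"{alert_id}: log for periodic human review"
    return f"{alert_id}: suppress as likely benign"

for alert, score in [("ALERT-001", 0.97), ("ALERT-002", 0.61), ("ALERT-003", 0.05)]:
    print(triage(alert, score))
```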

Humans will still do all the “creative stuff,” said Alibaba’s Chen, such as defining the problem, digitalizing the data, acquiring domain knowledge and doing causal inference.

4) Standardization efforts can help

Are we ready to standardize anything in AI? Some participants wondered if it was too early, but most recognized the need.

“We have many closed research systems,” said Andrew Gardner, Founder and Leader of the Center for Advanced Machine Learning (CAML) at the cybersecurity company Symantec. “Because of competition, data is proprietary. We are having a hard time finding synergy in the industry.”

“The need for standardization is there,” said Neil Sahota, Master Inventor and World Wide Business Development Leader at IBM. “These technologies can be used anywhere. We cannot regionalize it…. We need some standardization. We need best practices and ethical standards to use this technology.”

The workshop concluded with a session on how standardization efforts can help improve AI and machine learning for security. The session identified future directions for study by Study Group 17 (SG17) of the ITU Standardization Sector (ITU-T): identifying standards gaps in specific security and privacy controls that address identified threats and risks; suggesting potential ways forward to develop technical Recommendations to fill those gaps; and identifying stakeholders with whom SG17 will collaborate in the future.

For more information on the workshop, see the workshop’s programme, where you can find biographies of the speakers as well as their presentations.

Emerging AI-driven cyber threats and the dark side of AI will be explored further at the AI for Good Summit this spring.
