November 10, 2018

Cybercriminals Using AI to Fuel Hacking

In 2016, ZeroFOX took to Twitter with an automated spear phishing attack to determine whether machines could outperform human hackers. ZeroFOX researchers trained an AI system to study Twitter users’ behavior before crafting phishing bait. To the surprise of many, the experiment showed that the machine was more effective at tricking users into clicking on dangerous links than a human attacker: the AI distributed far more phishing tweets than its human counterpart and achieved a higher conversion rate.
“I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself.” — Professor Stephen Hawking

What’s the fuss about AI? Notably, this technology is built on models using deep neural networks (DNNs), which are loosely modeled on the neurons of the human brain. In effect, DNNs let a machine mimic human behaviors, in particular decision-making, problem-solving, and reasoning.
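To make the “neural network” idea concrete, here is a minimal sketch in plain Python: two layers of sigmoid units trained by backpropagation to learn the XOR function. This is a toy illustration of the learning mechanism described above, not anything resembling a production DNN, which would have millions of parameters and use a framework rather than hand-written loops.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Network shape: 2 inputs -> 3 hidden units -> 1 output.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0] * 3
W2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0

# XOR truth table: the classic task a single linear layer cannot learn.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    o = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, o

lr = 1.0
for _ in range(5000):
    for x, y in data:
        h, o = forward(x)
        # Backpropagate the squared-error gradient through both layers.
        d_o = (o - y) * o * (1 - o)
        for j in range(3):
            d_h = d_o * W2[j] * h[j] * (1 - h[j])  # use W2[j] before updating it
            W2[j] -= lr * d_o * h[j]
            for i in range(2):
                W1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_o

predictions = [round(forward(x)[1]) for x, _ in data]
print(predictions)
```

The same mechanism — iteratively adjusting weights to reduce error on observed data — is what lets an AI system “study behavior” at scale, whether the training data is images, text, or a target’s social media activity.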

AI relies on data, an advantage for the technology as ever more information is produced today. At the same time, malicious actors are inventing new attacks by leveraging AI, a trend that will make attack mitigation even more challenging. Hackers will widen their adoption of machine learning to design new attacks and to sharpen their ability to discover and disrupt the defenses deployed by businesses and security experts.

Hackers can deploy AI to automatically monitor emails and text messages and to generate personalized phishing emails, increasing the effectiveness of social engineering attacks.

Additionally, cybercriminals can use AI to mutate ransomware and other malware easily, creating multiple variations of an attack. In the process, a hacker can intelligently search for, discover, and exploit vulnerabilities in systems while avoiding detection. A good example is DeepLocker, IBM Research’s proof-of-concept for highly evasive, targeted AI-powered malware.

AI will also make attacks more affordable. Numerous advanced machine learning and deep learning techniques are readily available on open-source platforms, and cheap IT infrastructure makes launching sophisticated breaches inexpensive. An AI system can be built and deployed to perform tasks at a scale that was previously impossible for human attackers, allowing hackers to launch malicious attacks with extraordinary speed and depth. AI bots can be deployed to mine vast amounts of information from social networks and the public domain, steal personally identifiable information, or carry out advanced, coordinated attacks against systems.

Clearly, the availability of AI makes the technology easy for cybercriminals to weaponize, and the ZeroFOX experiment may be an indication that hackers are already using this approach. Hackers can now launch attacks with multiple variations and payloads at increased volume, leading to superfast and highly scalable data breaches.

Unfortunately, the world cannot slow down the innovation process or lock down all sensitive information assets. Firms should therefore deploy robust measures capable of detecting and blocking weaponized AI. When such adversarial AI attacks are detected, it is vital to share intelligence with relevant stakeholders to guide the creation of appropriate mitigation measures.
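Intelligence sharing has a concrete, standardized form: STIX 2.1 JSON objects, typically exchanged between organizations over protocols such as TAXII. Below is a minimal sketch of a STIX 2.1 Indicator built with only the Python standard library; the domain name in the pattern is a hypothetical placeholder, not a real indicator of compromise.

```python
import json
import uuid
from datetime import datetime, timezone

# STIX 2.1 timestamps are RFC 3339 UTC with fractional seconds and a "Z" suffix.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

# Minimal STIX 2.1 Indicator: only the required properties, plus a name.
# "example-phish.test" is an illustrative placeholder domain.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspected AI-generated phishing domain",
    "pattern": "[domain-name:value = 'example-phish.test']",
    "pattern_type": "stix",
    "valid_from": now,
}

print(json.dumps(indicator, indent=2))
```

Because the format is machine-readable, a recipient’s security tooling can ingest such objects and act on them automatically, which is exactly the speed of response that AI-driven attacks demand.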