Automated Attacks

Malicious artificial intelligence applications are creating dangerous new cybersecurity threats.

Cybersecurity analysts warn that the malicious use of artificial intelligence (AI) and machine learning (ML) will bring fundamental changes to the threat landscape in the next few years. Criminals are already using the technology to create powerful new threats while also enhancing existing attacks.

While AI and ML are being used to augment security in many ways, analysts with the research firm Forrester say cybercriminals are adopting these technologies much faster than IT organizations. The unfortunate reality is that highly profitable cybercrime organizations have more money, expertise and motivation to weaponize these technologies.

One concern is the spread of deep fakes — audio and video altered by algorithms to make them appear real. Deeptrace, an Amsterdam-based cybersecurity firm, recently reported that online deep-fake videos have increased by 84 percent in less than a year. In its “Predictions 2020: Cybersecurity” report, Forrester says deep fakes are likely to cost businesses more than $250 million in 2020.

“The authenticity of images and video can no longer be taken at face value, as artificial intelligence and other technologies allow for instant generation of complex fakes,” said Lars Buttler, CEO of the AI Foundation, a nonprofit organization that promotes commercial and social AI products. “As a result of the proliferation of fakes, human agency and free thinking are at risk.”

Enhanced Phishing, Social Engineering

The use of AI and ML to enhance a variety of existing threats may be of more immediate concern to most organizations. A recent report authored by 26 experts on the security implications of emerging technologies notes that AI and ML make many attacks more automated, scalable and cost-effective than ever before. The authors, representing organizations such as Oxford University's Future of Humanity Institute, the Center for a New American Security and the Electronic Frontier Foundation, say these attacks will be “especially effective, finely targeted and difficult to attribute.”

In their report, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” the authors say AI will be particularly effective in automating social engineering attacks, phishing attacks and advanced persistent threats (APTs). AI-powered automation can increase the scale and frequency of these attacks and make it possible for low-skill individuals to launch highly sophisticated attacks.

AI makes it easy for cybercriminals to rapidly find and collect a wealth of online information about potential victims, including personal details, contacts and favorite sites. Using natural language processing and text analysis tools, they can then mimic the look, feel and writing style of the sites and contacts a victim trusts, automatically generating malicious websites, emails and links that are likely to trick victims.

The experts also warn that criminals can create convincing chatbots or video chats to masquerade as trusted contacts for highly effective spear-phishing attacks. The first known instance of such an attack occurred earlier this year, when criminals used AI software to imitate the voice of an energy company CEO and trick an employee into transferring nearly $250,000 to an account controlled by the attackers.

Researchers at ZeroFox have demonstrated that a fully automated spear-phishing system could create tweets tailored to a user's demonstrated interests, achieving a high rate of clicks on malicious links. They believe Russian hackers used similar automation in a 2017 attack in which malware-laced tweets were sent to more than 10,000 Twitter users in the U.S. Department of Defense.

Hacking at Speed and Scale

AI tools can also automate conventional brute-force hacking. In a recent experiment, researchers set up a honeypot — a server for a fake online financial firm — and exposed usernames and passwords in a dark web market. As researchers monitored the fake site, a single automated bot broke in, scanned the network, collected credentials, siphoned off data and created new user accounts so attackers could gain access later. The bot accomplished all of this in only 15 seconds.

Programs such as AutoSploit automate the process of finding and exploiting vulnerable systems. They can rapidly analyze patterns of previous breaches, search for Internet-connected devices, conduct penetration tests and then automatically execute exploits when targets are identified. Other tools improve target selection and prioritization by analyzing information scraped from company websites, social media, news platforms and other publicly available sources. Such analysis can reveal a potential target's income, health status, family relationships, business connections and more.

Malicious AI is also used to automate a process known as “fuzzing,” in which hackers inject invalid data into the user-facing front end of a target program until it triggers a crash, thus revealing potential vulnerabilities. These flaws then become points of entry for APTs that remain undetected for extended periods, moving laterally throughout the network to harvest credentials and sensitive data. Eventually, this data is exfiltrated to a command-and-control server.

AI and ML offer exciting opportunities for business efficiency by giving machines the ability to analyze massive data sets, identify patterns and make autonomous decisions. Naturally, cybercriminals are equally excited about weaponizing these technologies. As security vendors develop countermeasures for these emerging attacks, organizations must remain vigilant and shore up any potential vulnerabilities using best-practice security techniques.

