Generative AI: Powerful Tool, Serious Security Risks

Organizations are adopting generative AI tools to automate tasks and gain new insight into data. Gen AI has the potential to transform operational processes and enable unprecedented levels of efficiency and innovation. However, gen AI also creates new security threats.

Gen AI needs data — and lots of it — to perform its magic. Without the proper controls, organizations could expose sensitive information and intellectual property.

Malicious actors are also using gen AI to become more efficient and effective. Gen AI tools can be used to create sophisticated phishing campaigns and adaptive malware that are difficult to detect. These tools also give attackers more insight into the systems and networks they want to infiltrate.

Data Leakage and Exposure

ChatGPT is perhaps the best-known gen AI tool. It can answer open-ended questions, solve math problems and summarize content. It can also generate unique text, images, charts, tables and software code based on user prompts.

Microsoft has also introduced a gen AI tool called Microsoft 365 Copilot, based on GPT-4. Copilot is designed to enhance Microsoft productivity tools by searching for data and creating documents. It can draft emails and Word docs, develop Microsoft Excel formulas, create PowerPoint presentations, and more.

These tools increase productivity by eliminating many time-consuming manual tasks. However, users can enter any type of data into a gen AI tool, including sensitive or proprietary information. Entering this data into a web- or app-based AI tool creates the risk of leakage or exposure.

Copilot goes further, searching all the data stored in the organization’s Microsoft environment for information relevant to the user’s prompt. It then uses this data to draft documents, regardless of whether it’s sensitive or proprietary information. This information can then get into the hands of unauthorized individuals.

More Effective Cyberattacks

ChatGPT enables even unsophisticated hackers to create convincing phishing attacks at unprecedented scale. Phishing emails generated by ChatGPT lack many of the telltale signs of these attacks, such as poor grammar and syntax. Additionally, ChatGPT allows hackers to automate the process, generating well-written, personalized emails and automatically responding if the recipient takes the bait. These phishing campaigns are extremely difficult for users to spot.

ChatGPT can also create malware that adapts almost instantly to an organization’s IT environment. It can analyze software code rapidly, modifying the malware as needed to infect as many systems as possible. The malware can also determine where sensitive data is stored and exfiltrate it automatically. In doing so, it can imitate the patterns of legitimate applications so that security tools and IT teams don’t detect anything unusual.

Combating Gen AI Security Threats

Addressing the security threats associated with gen AI requires a multipronged approach. To prevent data leakage and exposure, organizations must classify and tag sensitive information so that only authorized users can access it. Policies are needed to ensure that employees use gen AI appropriately and that AI companies don't use business data without express consent. Organizations can get started with the data classification and sensitivity labeling tools built into Microsoft 365.
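To make the classification step concrete, here is a minimal sketch of pattern-based tagging that blocks a prompt before it reaches a gen AI tool. The pattern names, regular expressions and blocking policy are illustrative assumptions, not the rules of Microsoft 365 or any other product.

```python
import re

# Hypothetical classification rules -- real deployments would use a managed
# policy engine, not hard-coded regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_tag": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def classify(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_prompt(text: str) -> bool:
    """Block the prompt if any sensitive label is detected."""
    return not classify(text)
```

In practice, this kind of check would sit in a gateway or browser extension between users and the AI service, so sensitive text is caught before it leaves the organization.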

To detect AI-generated phishing emails, organizations need tools that analyze device and user behavior to detect compromised accounts. These tools can flag emails from accounts behaving oddly or being accessed from suspicious locations. This alerts recipients that the email may not be trustworthy even though it appears to come from a legitimate account.
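The location-based check described above can be sketched in a few lines. The account history, country codes and "never seen before" rule are simplifying assumptions; commercial tools weigh many more behavioral signals.

```python
from dataclasses import dataclass

@dataclass
class SignIn:
    account: str
    country: str

def is_suspicious(event: SignIn, history: dict[str, set[str]]) -> bool:
    """Flag a sign-in from a country never before seen for this account.

    `history` maps each account to the set of countries it has previously
    signed in from (hypothetical data for illustration).
    """
    usual = history.get(event.account, set())
    return event.country not in usual

# Example baseline: alice has only ever signed in from the US and Canada.
history = {"alice@example.com": {"US", "CA"}}
```

A real system would also update the baseline over time and combine this signal with device fingerprints, sign-in times and message content before alerting recipients.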

Zero trust principles can help organizations combat adaptive malware by presuming that applications are malicious until authenticated and validated. Zero-day detection, application whitelisting and data classification should also be used to identify these threats and prevent data exfiltration.

The security experts at Verteks are staying abreast of AI-enabled security threats and building an arsenal of tools to protect your environment. Give us a call to discuss your plans for gen AI so we can help you get the right controls in place.

