
ChatGPT and cybersecurity: Data breaches, system vulnerabilities and malicious use (Reader Forum)

When ChatGPT became publicly available in the fall of 2022, most people were awed by how far generative AI had advanced and inspired by its promise of greater efficiency and productivity, especially for businesses. Such technologies, however, also carry inherent risks.

The good news is that there are ways to protect organizations from these vulnerabilities while still reaping the benefits of these innovative technologies. Below, I explain how.

Cybersecurity risks of using generative AI

Recent research shows that about 4.2% of ChatGPT users think it’s okay to input sensitive data into the large language model (LLM). The same report describes how “in one case, an executive cut and pasted the firm’s 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient’s name and their medical condition and asked ChatGPT to craft a letter to the patient’s insurance company.”

One of the biggest risks of using generative AI, therefore, is that people might choose to employ it irresponsibly. If an employee feeds an LLM confidential information, a different user may be able to retrieve it at a later date. Staff members who are ignorant of how ChatGPT works could leak a company’s proprietary data, thereby reducing or even eliminating that company’s competitive advantage.

In addition, cybersecurity research firm HYAS recently demonstrated that ChatGPT can be used to create polymorphic malware that bypasses even top-tier endpoint detection and response (EDR) products. Polymorphic malware differs from conventional malware in that it modifies itself on the fly, evading static signature detection. In short, polymorphic malware infiltrates systems and then operates imperceptibly.
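To see why static signatures break down, consider the following minimal Python sketch. It is illustrative only, using a harmless string in place of real code: a scanner that fingerprints content by its hash will no longer recognize that content once it has been trivially re-encoded.

```python
# Illustrative only: why static (hash/signature) detection fails against
# code that rewrites itself. The "payload" here is a harmless string.
import hashlib
import os

payload = b"print('hello')"  # stand-in for any bytes a scanner might fingerprint

def signature(data: bytes) -> str:
    """A static signature, computed the way a naive scanner might."""
    return hashlib.sha256(data).hexdigest()

def mutate(data: bytes) -> bytes:
    """XOR-encode the bytes with a random key and prepend the key.
    The content is fully recoverable, but every byte on disk changes."""
    key = os.urandom(1)[0]
    return bytes([key]) + bytes(b ^ key for b in data)

original_sig = signature(payload)
mutated_sig = signature(mutate(payload))

print(original_sig == mutated_sig)  # False: the known signature no longer matches
```

Real polymorphic malware applies far more sophisticated transformations, but the defensive lesson is the same: a detector keyed to a fixed fingerprint can be sidestepped by any change to the bytes it fingerprints.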

ChatGPT’s shockingly accurate ability to debug code and find programming errors also enables attackers to discover security vulnerabilities. Although ChatGPT has been trained not to provide illegal hacking services, researchers have shown that those protections can be thwarted: one simply needs to tell the LLM that one is a researcher competing in a “Capture the Flag” (CTF) computer-security competition, or engaged in some other benign activity, and it will help find bugs and even write code to exploit them. In another well-known bypass, many of the front-end safeguards are absent when a person uses the ChatGPT application programming interface (API) directly.
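That last point is worth underlining for anyone building on the API: safety screening becomes the developer’s job. As a minimal sketch, assuming the official openai Python package (v1 or later), an OPENAI_API_KEY environment variable, and an illustrative model name, input can be run through OpenAI’s moderation endpoint before it is ever sent to the model.

```python
# A minimal sketch, not a complete safety layer. Assumes the openai Python
# package (v1+) and an OPENAI_API_KEY environment variable; the model name
# below is an assumption for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screened_completion(prompt: str) -> str:
    # Screen the prompt with OpenAI's moderation endpoint first.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return "Request refused: prompt was flagged by the moderation check."
    # Only forward prompts that pass the check.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(screened_completion("Summarize best practices for password storage."))
```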

Only after people abuse the technology and issues emerge does the team behind ChatGPT step in to patch things up, although it does move to fix problems faster than many other technology companies.

How to protect your business

Now that HYAS has shown that ChatGPT can write fully undetectable malware, cybersecurity professionals need to shift their focus to detection, not prevention. Toward that end, the human behind the computer has never been more important to a company’s defensive posture. Since EDR systems are likely to fail, successfully identifying system penetration will increasingly depend on a human’s sense that something is “off.”

As such, the most important step a business can take to protect itself is training its employees to spot anomalous activity on a network. Users should be taught to watch for irregularities such as unintended mouse movement, high central processing unit (CPU) consumption, and webcams turning themselves on, among other things. Simple slogans like “see something, say something” can go a long way.
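Some of this watching can be automated. Below is a minimal sketch, not a production tool, of a host-side watcher that flags sustained high CPU usage; it assumes the third-party psutil package, and the threshold values are illustrative.

```python
# A minimal sketch of a host-side CPU watcher. Assumes psutil is installed
# (pip install psutil); THRESHOLD and SUSTAINED are illustrative values.
import psutil

THRESHOLD = 90.0   # percent CPU considered anomalous
SUSTAINED = 5      # consecutive one-second readings before alerting

def watch_cpu() -> None:
    high_readings = 0
    while True:
        usage = psutil.cpu_percent(interval=1)  # average over the last second
        high_readings = high_readings + 1 if usage >= THRESHOLD else 0
        if high_readings >= SUSTAINED:
            print(f"ALERT: CPU pinned at {usage:.0f}% for {SUSTAINED}s -- investigate")
            high_readings = 0

if __name__ == "__main__":
    watch_cpu()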

Of course, users also need to be taught never to feed HIPAA-protected information, client data, source code, or proprietary information into a generative AI.
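Training can be reinforced with simple tooling. Below is a minimal sketch of a client-side guardrail that refuses to send a prompt containing obviously sensitive strings; the patterns shown are illustrative assumptions, not a complete data-loss-prevention policy.

```python
# A minimal sketch of a pre-submission filter. The patterns are illustrative
# assumptions, not a complete DLP policy.
import re

BLOCKLIST = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "sensitivity marker": re.compile(r"(?i)\b(confidential|proprietary|patient)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocklisted patterns found in the prompt."""
    return [name for name, pattern in BLOCKLIST.items() if pattern.search(prompt)]

hits = screen_prompt("Patient John Doe, SSN 123-45-6789, needs a letter...")
if hits:
    print(f"Refusing to send prompt; matched: {', '.join(hits)}")
```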

Finally, just because ChatGPT is ushering in a new wave of cyberattacks doesn’t mean companies can forget about timeless threats like social engineering and phishing. Employees still need to be trained to spot and report malicious emails, texts, and other communications.

A new era of cybersecurity

ChatGPT’s capabilities already give hackers powerful tools, and its sophistication will only improve over time. Host-based defenses like EDR systems will protect you only up to a point. Now that prevention is becoming more challenging, the ability of informed staff to detect malicious behavior has never been more important.
