
It’s hard to read a technology blog or article today without seeing a reference to some Large Language Model (LLM) like GPT-4 or Google Bard. But what do these LLMs mean for an enterprise, and are they secure? Are they practical for everyday use cases, or are they reserved for complex computations? I decided to test out ChatGPT and see what it could do.

A quick internet search for “ChatGPT” was all it took to find the site. Setting up an account was easy enough: I navigated to the website, clicked the “Sign Up” button, and entered my information. I was then presented with a text box and a blinking cursor, and found myself wondering, “What do I ask the all-knowing AI chatbot?” It was about 4:30pm and my family was asking about dinner. With nothing prepared and the power of AI at my fingertips, I gave the chatbot a list of items in my fridge and pantry and asked it to create a dinner for four people in under an hour, with recipes. That night we ate well. The chatbot suggested chicken parmesan with noodles, salad, and roasted broccoli. I found it amazingly convenient to use this technology to come up with new dinner ideas from just the ingredients I had on hand. While this was a fun exercise in how I can use an AI chatbot, it got me thinking: what are the real-world cybersecurity use cases for AI chatbots, and how are the bad guys leveraging them already?

Real-World Cybersecurity Use Cases for AI Chatbots
AI Chatbots can handle a wide range of security-related queries and concerns, including password resets, account balance checks, and other routine inquiries. They can also detect and alert security teams to potential threats, such as unusual activity or attempted logins from unfamiliar devices.
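To make the “logins from unfamiliar devices” idea concrete, here is a minimal sketch of the kind of check such a system might perform. This is purely illustrative: the names (`known_devices`, `check_login`) and the in-memory device list are assumptions, not a real product’s API.

```python
# Hypothetical sketch: flag logins from devices a user has not been
# seen on before. In a real deployment the device list would live in
# an identity provider or SIEM, not a Python dict.

known_devices = {
    "alice": {"laptop-1234", "phone-5678"},
}

def check_login(user, device_id):
    """Return an alert string for an unfamiliar device, else None."""
    seen = known_devices.get(user, set())
    if device_id not in seen:
        return f"ALERT: {user} logged in from unfamiliar device {device_id}"
    return None

print(check_login("alice", "laptop-1234"))   # familiar device -> None
print(check_login("alice", "tablet-9999"))   # unfamiliar device -> alert
```

An AI chatbot layered on top of this kind of signal could then explain the alert to an analyst in plain language or route it to the right team.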

In addition, AI Chatbots can automate security-related tasks, such as patching and updating software, freeing up security teams to focus on more complex issues. They can also help improve the customer experience by providing personalized responses and 24/7 availability.

AI Chatbots like GPT-4 and GitHub Copilot are being used to write and audit code, for phishing and scam detection, threat intelligence, incident response, and automation. For an enterprise, the possibilities are truly endless, but those possibilities come with risk.

Security and Privacy Concerns
Chatbots may collect sensitive personal information from their users. If this data falls into the wrong hands, it could be used for identity theft, fraud, or other malicious purposes. Enterprises must also use extreme caution when interacting with hosted AI chatbots. Remember, anything you share with ChatGPT may be retained and used to further train its models. For example, recent reporting indicates that engineers are using ChatGPT to help troubleshoot malfunctioning databases, while others are entering meeting notes to generate meeting minutes more efficiently. That sensitive information is now captured by ChatGPT and used to train the underlying LLMs, and it could potentially be surfaced to anyone who crafts the right prompt asking for the type of data these employees freely entered.

Criminals Utilizing AI for Illicit Acts
As AI technology continues to advance, cyber criminals are also finding new ways to exploit it for their own gain. For example, they may use sophisticated AI algorithms, like GPT-4, to craft convincing phishing emails or generate fake social media accounts for the purpose of social engineering attacks.

Criminals may also use AI to automate attacks, such as brute-force attacks on passwords or automated scans for vulnerabilities in software. These attacks can be carried out at scale, allowing criminals to target many victims simultaneously.
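The defensive counterpart to an automated brute-force attack is equally simple to sketch: lock an account after too many failed logins within a short window. The thresholds and function names below are illustrative assumptions, not a specific vendor’s implementation.

```python
# Hypothetical sketch: lock an account after repeated failed logins
# in a sliding time window -- a basic counter to automated
# brute-force attacks. WINDOW_SECONDS and MAX_FAILURES are
# illustrative values, not recommendations.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_FAILURES = 5

failures = defaultdict(deque)  # user -> timestamps of failed attempts

def record_failure(user, timestamp):
    """Record a failed login; return True if the account should lock."""
    attempts = failures[user]
    attempts.append(timestamp)
    # Drop attempts that have aged out of the window.
    while attempts and timestamp - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) >= MAX_FAILURES

locked = False
for t in range(5):                # five failures in five seconds
    locked = record_failure("bob", t)
print(locked)                     # locks on the fifth failure
```

Rate limiting of this kind raises the cost of automation for the attacker without inconveniencing a legitimate user who mistypes a password once or twice.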

Beyond phishing and password cracking, AI has begun to show up in extortion and “kidnapping” cases. In April of 2023, an Arizona mother reported to authorities that she had received a call claiming her daughter had been kidnapped while on a ski trip with friends. She told authorities she could hear her daughter’s voice crying and talking, and a man demanding one million US dollars. After calling authorities, the mother was able to confirm her daughter’s safety, and only then realized the entire incident was a scam powered by AI. AI voice technology has evolved to the point that a person’s tone and manner of speaking can be cloned from only a few captured words. A person’s outgoing voicemail message is enough to allow an attacker to steal their voice and potentially use it for illicit means.

Protect Your Infrastructure
While AI-powered chatbots offer many benefits, such as improved threat detection and response times, they also come with potential privacy and security risks. As organizations continue to adopt chatbots into their security strategies, it is crucial to establish guidelines and accountability for their implementation, and to ensure that privacy and security protocols are in place to safeguard sensitive data. By doing so, we can harness the power of AI to strengthen cybersecurity defenses while keeping measures in place to protect against an ever-evolving threat landscape.