It’s hard to read any technology blog or article today without seeing a reference to a Large Language Model (LLM) such as GPT-4 or Google Bard. But what do these LLMs mean for an enterprise, and what are their cybersecurity implications? Are they practical for everyday use cases, or are they reserved for complex computations? I decided to test ChatGPT to see what it could do and where the cybersecurity implications lie.

It only took a quick internet search for ‘ChatGPT’ to find the site. Setting up an account was easy enough: I navigated to the website, clicked the “Sign Up” button, and entered my information. The website then presented me with a text box and a blinking cursor. I found myself wondering, “What do I ask the all-knowing AI chatbot?” It was about 4:30pm and my family was asking about dinner. Having nothing prepared and the power of AI at my fingertips, I gave the chatbot a list of items in my fridge and pantry and asked it to create a dinner for four people in under an hour, with recipes.

That night we ate well. The chatbot suggested chicken parmesan with noodles, a salad, and roasted broccoli. I found it amazingly convenient to use this technology to come up with new dinner ideas from just the ingredients I had on hand. While this was a fun exercise in how I could use an AI chatbot, it made me think about the real-world cybersecurity use cases for AI chatbots, and about how bad actors are already leveraging them.

Real-World Cybersecurity Use Cases for AI Chatbots

AI chatbots can handle a wide range of security-related queries and concerns, including resetting passwords, checking account balances, and other routine inquiries. They can also detect and alert security teams to potential threats, such as unusual activity or attempted logins from unfamiliar devices.

In addition, AI chatbots can automate security-related tasks, such as patching and updating software, freeing up security teams to focus on more complex issues. They can also help improve the customer experience by providing personalized responses and 24/7 availability.
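To make the “logins from unfamiliar devices” idea above concrete, here is a minimal sketch of that kind of alerting logic. Every name here (`known_devices`, `record_login`, the sample device IDs) is a hypothetical illustration, not any real product’s API:

```python
# Minimal sketch of unfamiliar-device login alerting.
# All names and identifiers are illustrative assumptions.

known_devices = {
    "alice": {"laptop-1234", "phone-5678"},  # devices previously seen for this user
}

alerts = []

def record_login(user: str, device_id: str) -> None:
    """Flag logins from devices not previously seen for this user."""
    seen = known_devices.setdefault(user, set())
    if device_id not in seen:
        alerts.append(f"ALERT: {user} logged in from unfamiliar device {device_id}")
        seen.add(device_id)  # remember the device so we alert only once

record_login("alice", "laptop-1234")  # known device, no alert
record_login("alice", "tablet-9999")  # unfamiliar device, raises an alert
```

A real deployment would feed these alerts into a security team’s queue or a chatbot conversation rather than a plain list, but the core check is the same: compare the login’s device fingerprint against what has been seen before.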

AI tools like GPT-4 and GitHub Copilot are being used to write and audit code, detect phishing and scams, gather threat intelligence, and support incident response and automation. For an enterprise, the possibilities are nearly endless, but those possibilities come with risk.
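The phishing- and scam-detection use case mentioned above can be sketched with a simple keyword-and-URL heuristic. A production system would use an LLM or a trained classifier instead; every phrase, weight, and function name below is an illustrative assumption, shown only to suggest the kinds of signals such a detector weighs:

```python
import re

# Toy phishing heuristic. The phrases and scores are illustrative
# assumptions, not a vetted rule set.

SUSPICIOUS_PHRASES = ("verify your account", "urgent action required", "password expired")

def phishing_score(email_text: str) -> int:
    """Return a rough suspicion score for an email body."""
    text = email_text.lower()
    score = sum(2 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    # Links pointing at raw IP addresses are a classic phishing tell.
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", text):
        score += 3
    return score

sample = "URGENT ACTION REQUIRED: verify your account at http://192.168.1.5/login"
print(phishing_score(sample))  # -> 7 (two matched phrases + IP-address link)
```

An LLM-based detector replaces the hand-written rules with a model prompt or classifier, but the workflow is the same: score the message, then route high-scoring mail to quarantine or to an analyst.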

Cybersecurity Implications and Privacy Concerns

Chatbots may gather sensitive personal information from users. If this data falls into the wrong hands, it could be used for identity theft, fraud, or other malicious purposes. Enterprises must also use extreme caution when interacting with hosted AI chatbots: remember that ChatGPT retains anything you share and can use it to further train its models. For example, recent reporting indicates that engineers are pasting details of malfunctioning databases into ChatGPT to troubleshoot them, while others are entering meeting notes to generate minutes more efficiently. ChatGPT captures this sensitive information and can use it to train its underlying LLMs, so anyone who crafts the right prompt could potentially surface the data these employees freely entered.

Criminals Utilizing AI for Illicit Acts

As AI technology continues to advance, cyber criminals are finding new ways to exploit it for their own gain. For example, they are using sophisticated AI models like GPT-4 to craft convincing phishing emails and to generate fake social media accounts for social engineering attacks.

Criminals may also use AI to automate attacks, such as brute-force password guessing or automated scans for software vulnerabilities, letting them target many victims simultaneously and at scale.

Beyond phishing and password cracking, we are seeing AI-powered tools appear in extortion and “kidnapping” cases. In April 2023, an Arizona mother reported to authorities that she had received a call claiming her daughter, who was on a ski trip with friends, had been kidnapped. She told authorities she could hear her daughter’s voice crying and talking, and a man demanding one million US dollars. After calling authorities, the mother confirmed her daughter’s safety, and only then realized that the entire incident had been a scam powered by AI. AI voice technology has evolved to the point where an attacker can clone a person’s tone and manner of speaking from only a few captured words; even an outgoing voicemail message is enough for an attacker to steal a voice and use it for illicit means.

Protect Your Infrastructure

While AI-powered chatbots offer many benefits, such as improved threat detection and response times, they also carry real privacy and security risks. As organizations adopt chatbots into their security strategies, it is crucial to establish guidelines and accountability for their implementation and to ensure that privacy and security protocols are in place to safeguard sensitive data. By doing so, we can harness the power of AI to strengthen cybersecurity defenses while better protecting against an ever-evolving threat landscape.