ChatGPT as a Weapon

Earlier this Cybersecurity Awareness Month, we covered two different risks to modern IT platforms: Phishing and Shadow AI. While each topic carries risks distinct to its use and nature, the ease of using one to create the other warrants its own article: phishing remains a top threat across organizations, and AI tools now allow anyone to create convincing phishing campaigns.

The benefits of using an AI tool to create phishing e-mails include eliminating spelling and grammar errors, improving “corporate speak,” and removing any unique writing flaws or patterns that are traceable to the attacker. This is of particular use to attackers with limited or no English grammar skills, which in the past has been a dead giveaway that an e-mail was not authentic. Attackers can also increase the volume and variety of their attacks, cutting the time and cost of writing messages meant to solicit sensitive information.


The Weaponization of ChatGPT: A Demonstration

There are several free and paid AI tools available, but we will use ChatGPT as an example, since it is one of the most accessible. Although ChatGPT is not allowed to create phishing e-mails, or anything malicious for that matter, we will demonstrate how attackers can easily get around the safeguards built into the platform to construct a convincing phishing e-mail.

Since ChatGPT will not provide the phishing e-mail directly, the attacker takes a more exploratory route to create one, starting by gaining some insight into what kind of e-mail will actually get a response from the recipient. Simply asking ChatGPT which departments an employee is most likely to respond to when receiving what appears to be an internal e-mail gets the attacker an early win.

The tool not only tells the attacker whom to impersonate, but also what to ask for when constructing the phishing attempt. At the top of the list (which also includes close to a dozen other standard organizational departments) is IT Security. Requests to change passwords, confirm unusual activity, and verify credentials are all ways an attacker can obtain account information if they can get a positive response from the target.

Armed with that insight, the attacker asks ChatGPT to write an e-mail posing as an IT Security Director who is asking an employee whether they changed their login information, and to include a “verification” link.

The result is a predictable enough e-mail that is, at face value, convincing. After visiting LinkedIn to obtain the name and headshot of the target organization’s actual IT Security Director (a tactic used in the recent MGM breach) and preparing whatever bad news sits on the other side of the “verification” link, the attacker has their phish.

The e-mail produced has several problems that don’t hold up under scrutiny, but an employee not well versed in IT matters, or one new to the organization, may still end up responding to the imposter to stay out of the crosshairs of the IT department.

While it turns out ChatGPT does end up writing phishing e-mails, the platform did not produce anything that a determined, pre-AI attacker could not have come up with given enough time. The payoff for attackers using AI tools is that the effort is low and the reward is high, regardless of skill level. Users from anywhere in the world can convincingly pose as one or more employees of an organization, with no obstacles of language or culture, after only a few prompts. They can also easily add depth and complexity to their messages. For example, attackers can point ChatGPT to any available writing samples from the individual they wish to impersonate, imitating that person’s writing “voice” to create even more believable messages.

Training employees on what phishing looks like – including real world examples – remains the best way to defend against phishing. Employees who know what a normal e-mail looks like and who are familiar with IT policies and procedures would likely not be fooled by the attempted phishing scheme that was constructed.

McKonly & Asbury can assist your company in managing cybersecurity threats by performing a SOC 2 engagement or a SOC for Cybersecurity engagement to identify whether effective processes and controls are in place and provide you with recommendations to detect, respond to, mitigate, and recover from cybersecurity events. We can answer any questions and help you determine if a SOC 2 or SOC for Cybersecurity report would be useful for your company. Be sure to visit our firm’s SOC, Cybersecurity, Forensic Examination, and Information Technology pages, and don’t hesitate to contact our team regarding our services.

About the Author

Michael Murray

Mike joined McKonly & Asbury in 2022 and is currently a Senior Consultant with the firm. He is a member of the firm’s Internal Audit Segment, serving clients in the government and commercial sectors.
