Generative AI is Taking Whaling Attacks up a Notch

Posted on:
August 7, 2025
Author
Scott Bekker
Moderator & Editor

TL;DR - Article Summary

Generative AI is being used to make phishing emails more convincing by correcting grammatical errors. A more sophisticated attack, called whaling, uses generative AI to analyze a specific person's email history and mimic their writing style and content, making it nearly impossible for recipients to spot the fake. The best defense is to always verify unusual requests, especially for things like wire transfers, using a different communication method.

Right from the beginning, one of the clearest security risks of generative AI was that phishing attacks would dramatically improve. Indeed, that's what we've seen.

A key line of defense against phishing has historically been to train users to pay extra close attention when they see grammatical mistakes or poor use of the English language in an email. With the United States being a prime target of grifters and attackers from outside the country, shoddy English was a clue that the message could have been written by a bad actor rather than by the person it purported to come from.

The advent of sophisticated generative AI tools like ChatGPT, Gemini, and Copilot means bad actors for whom English is a second or third language can get an LLM to write messages that are more grammatically correct than emails from real people.

So far, so predictable. However, we're starting to hear about another type of misuse of generative AI, in which attackers feed the email history between two parties into an LLM to generate even more realistic spoofed emails. The method allows them to precisely capture the phrasing and style of one of the emailers, and to reference topics the two people have previously discussed.

That approach would make it almost impossible for the receiving party to distinguish the fake message from a real one.

Now, my first reaction is that if an attacker has the email history – say between a CEO and an administrative assistant – it’s game over anyway. They’re already inside the network with full access.

But there are several reasons it isn't game over. One is whaling, in which an attacker uses phishing techniques aimed at a specific high-value target rather than blasted out to the world. An attacker could find a target, infiltrate the network, download the email history, and then send the administrative assistant convincing instructions from the CEO to execute a large wire transfer.

You could also see scenarios where attackers upload public exchanges between targets from LinkedIn, Facebook, or X into an LLM to make malicious messages more believable.

Ultimately, it's a good time for organization-wide reminders: if a request seems odd in any way, especially if it involves fund transfers, step outside the communication channel the request arrived in, such as email, and double-check with the requesting party by phone, by text, in Slack, or, best of all, in person.
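Out-of-band verification is a human process, but mail systems can at least flag the messages that deserve it. Below is a minimal illustrative sketch, using Python's standard `email` library, of two simple heuristics: a Reply-To domain that doesn't match the From domain (a classic spoofing tell), and money-movement language in the body. The function name and the keyword list are my own assumptions for illustration, not a real product's API, and a check like this supplements rather than replaces picking up the phone.

```python
from email.message import EmailMessage
from email.utils import parseaddr

# Illustrative keyword list; a real deployment would tune this.
MONEY_KEYWORDS = {"wire transfer", "urgent payment", "gift card"}

def needs_out_of_band_check(msg: EmailMessage) -> bool:
    """Return True if the message warrants verification via another channel."""
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", from_addr))
    from_domain = from_addr.rpartition("@")[2].lower()
    reply_domain = reply_addr.rpartition("@")[2].lower()
    # A Reply-To pointing at a different domain than From is suspicious.
    if reply_domain and reply_domain != from_domain:
        return True
    # Money-movement language should always trigger a phone or in-person check.
    body = msg.get_content().lower() if not msg.is_multipart() else ""
    return any(kw in body for kw in MONEY_KEYWORDS)

# Example: a spoofed wire-transfer request (note the look-alike Reply-To domain).
suspicious = EmailMessage()
suspicious["From"] = "ceo@example.com"
suspicious["Reply-To"] = "ceo@examp1e.net"
suspicious.set_content("Please execute the wire transfer today.")

routine = EmailMessage()
routine["From"] = "ceo@example.com"
routine.set_content("Lunch at noon?")
```

Here `needs_out_of_band_check(suspicious)` returns True on both heuristics, while `needs_out_of_band_check(routine)` returns False. The point of the article stands either way: an LLM-crafted message can pass every textual sniff test, so the decisive control is verifying unusual requests through a second channel.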
