·5 min read

March 2026’s AI Phishing Wave Exposed a New BEC Playbook

Foresiet’s March–April incident roundup suggests AI is now compressing the full business-email-compromise loop: research, impersonation, and persuasion into minutes. Which controls still work when a fake executive can be spun up, tailored, and deployed at machine speed?

On March 18, 2026, Foresiet described one of the more telling AI-enabled incidents in its roundup: a finance manager got a message that appeared to come from the CFO, referenced a live vendor payment dispute, and asked for a wire change before “the window closes.” The lure was ordinary. The method was not. The attacker had already scraped the executive’s public calendar, recent hiring posts, and a partner announcement, then used AI to draft a message that matched the company’s tone, timing, and internal jargon closely enough to pass the first glance test.

That is the new BEC playbook. Not “better phishing.” Faster reconnaissance, cleaner impersonation, and more convincing pressure, compressed into minutes instead of days. Foresiet’s March–April 2026 incident roundup, along with trend data cited from IBM X-Force and Akamai, points to the same ugly conclusion: AI is shrinking the business-email-compromise loop to machine speed, and the controls that used to buy you time are getting less of it.

The old BEC model is broken

The usual advice is still the same: train users, enforce MFA, and verify payment changes out of band. None of that is wrong. It’s just incomplete enough to be dangerous. Plenty of teams still treat BEC as a phishing problem, which is how you end up staring at a wire transfer approved by someone who has sat through three security awareness modules and one compliance refresher. Very reassuring. Very expensive.

The old model assumed attackers needed time to craft a believable message, so you could catch them on grammar, awkward context, or a mismatched sender domain. That assumption was already shaky after the 2022 Lapsus$ breaches, where social engineering and account takeover beat more “sophisticated” controls at Okta, Microsoft, Nvidia, and Samsung. AI removed the bottleneck. Now an attacker can research the target, clone the executive voice, and tailor the message to a current business event before your help desk finishes opening the ticket.

AI compresses the entire BEC kill chain

Foresiet’s March–April incident set shows the real shift: AI is not just helping with phishing copy. It is compressing the whole kill chain. A fake CEO no longer needs a week of manual prep. A model can ingest public LinkedIn data, press releases, leaked org charts, and prior email phrasing, then generate a plausible request that references the right project, the right vendor, and the right urgency. That matters because BEC succeeds on timing and context, not on technical exploit chains.

This is also why the old “spot the typo” guidance is dead weight. AI-generated phishing can be clean, localized, and role-specific. More importantly, the payload is often a conversation, not a link. The attacker may start in email, move to SMS, then pivot to Teams or Slack once they know who answers quickly. If your detection logic only looks for malicious attachments or known-bad domains, you’re watching the wrong layer of the attack.

The more dangerous twist is not the fake executive. It’s the fake process. If an attacker can convincingly ask for a “routine” change to invoice routing, payroll details, or vendor banking, they do not need to compromise the CEO account at all. They only need to impersonate the authority structure long enough to get a trusted employee to bypass the normal check. Identity is still the real attack surface; email is just the delivery vehicle.

What to do instead

Start with controls that do not depend on human pattern recognition. For payment workflows, require out-of-band verification through a known-good contact path already stored in your finance system, not the number in the email thread. For high-risk changes, use dual approval with separate channels and separate identities. If a single inbox can authorize a wire, you have built a fraud machine and called it efficiency.

Then harden the identity layer where BEC actually lands. Enforce phishing-resistant MFA for executives, finance, and help desk staff using FIDO2/WebAuthn, not SMS codes that can be intercepted or socially engineered. Lock down mailbox rules, OAuth grants, and session tokens, because modern BEC often turns into account takeover once the attacker gets a foothold. Microsoft, Google Workspace, and Okta all expose the logs you need; if you are not reviewing sign-in anomalies, impossible travel, new device enrollments, and suspicious consent grants, you are leaving the front door open and admiring the lock.

You also need to test your AI integrations. If your team is using copilots, chat agents, or internal LLMs with access to email, tickets, or CRM data, red-team those systems for prompt injection and data leakage. Foresiet’s incident roundup is a reminder that the attack surface now includes anything your AI can read and repeat. If your threat model does not include your own prompts and your own data connectors, it is not a threat model. It is a slide deck.

Finally, keep the boring controls that still work: least privilege, network segmentation, and audit logs. Limit who can change payment destinations, who can approve exceptions, and who can export contact data. Make sure audit trails capture the full chain: sender, mailbox rules, OAuth consent, session creation, and approval history. Compliance frameworks will happily certify your documentation while attackers move money through your “approved process.” That is theater, not defense.

Bottom line

March 2026’s AI phishing wave did not invent BEC. It industrialized it. Foresiet’s roundup shows attackers can now research, impersonate, and persuade at machine speed, which means the window for human judgment is getting smaller every month.

If you want to reduce the risk, do the unglamorous work: require phishing-resistant MFA for high-risk users, force out-of-band verification for payment changes, split approval duties across separate channels, monitor mailbox and OAuth abuse, and restrict who can touch payment destinations or admin actions. Test your AI tools for prompt injection and data leakage before an attacker does it for you. If you are still relying on someone to notice a slightly off tone in an email from “the CFO,” you are betting the treasury on vibes. That is not a control. That is a future incident report.

Related posts

RAG Security in 2026: Stop Prompt Injection Before It Reaches Production

Retrieval-augmented apps are now a top AI attack surface because poisoned documents can steer model answers, leak secrets, or trigger unsafe actions. This post shows the controls teams are using to sanitize sources, isolate tools, and verify retrieved context before generation.

Why AI Red Teaming Is Becoming a Core Security Control

As more teams ship LLM-powered products, red teaming is shifting from a one-time test to a recurring control that finds prompt injection, data leakage, and unsafe tool use before attackers do. The question is no longer whether to test your model, but how to do it continuously without slowing delivery.

Prompt Injection Defense: Why AI Gateways Are Becoming a Security Control

As LLM apps move from pilots to production, prompt injection is turning AI gateways into a practical control point for filtering malicious inputs, enforcing policy, and logging risky model calls. The real question is no longer whether to deploy one, but how to make it effective without breaking useful workflows.

← All posts