Hackers Are Now Using AI Platforms to Steal Microsoft 365 Credentials

A new twist in the world of phishing is here, and it’s more dangerous than ever. In a shocking real-world case uncovered by Cato Networks, hackers weaponized a legitimate AI marketing platform to pull off a sophisticated phishing attack and steal Microsoft 365 credentials. If you’re using AI tools in your business, this one’s a must-read.

 

The Real Story: How Hackers Weaponized AI in 2025

In July 2025, cybersecurity firm Cato Networks uncovered a phishing campaign that successfully compromised at least one U.S. investment firm. But this wasn’t your run-of-the-mill phishing scam. Instead of shady links or suspicious email domains, hackers used Simplified AI, a legitimate and widely used marketing platform, to trick victims and steal Microsoft 365 credentials.

 

Let’s break down exactly how it happened.

 


Step-by-Step Breakdown of the Attack

The hackers didn’t just throw together a sloppy email blast. This campaign was slick, well-planned, and incredibly deceptive. Here's how it worked:

 

1. Impersonation of Executives

The phishing emails appeared to come from real executives at a global pharmaceutical company.

  • Real names pulled from LinkedIn

  • Authentic company logos and email formatting

  • Social engineering at its finest

2. Password-Protected PDFs

The emails contained encrypted PDF files—passwords were included in the message body.

  • These evaded security scanners (since scanners can't read encrypted files)

  • PDFs included fake internal documents with links

3. Redirection to Simplified AI

Victims who clicked the link were taken to app.simplified.com, a trusted domain used by real businesses for marketing.

  • The page was made to look like a legitimate dashboard

  • It featured company branding and Microsoft 365 imagery

4. Fake Microsoft 365 Login Page

The final step redirected users to a fake Microsoft 365 login portal that captured their credentials the moment they were entered.

 


Why This Phishing Attack Was So Dangerous

This wasn’t a sloppy phishing scam. It was designed to slip past firewalls, email filters, and even cautious employees.

 

Here’s why it worked:

 

Use of Trusted AI Platforms

IT teams had already whitelisted Simplified AI as a trusted domain. That means no red flags were triggered by network tools.

 

AI Tools Embedded in Corporate Workflow

Employees are used to seeing AI tools like Simplified AI in their daily work. That familiarity was weaponized.

 

Bypassing Traditional Security

Password-protected PDFs and known domains easily slipped through traditional threat detection systems.

"Threat actors are no longer relying on suspicious servers or cheap lookalike domains," Cato’s report warns.
"Instead, they abuse the reputation and infrastructure of trusted AI platforms."


What Makes This a Wake-Up Call for Businesses?

AI platforms like Simplified AI are designed to boost productivity, streamline content, and help teams innovate. But this attack revealed a new truth:
Anything that builds trust can also be exploited.

 

This incident is part of a growing trend of “Shadow AI”—where AI tools are integrated into workplaces without full security oversight. Marketing teams, sales reps, even HR departments are bringing in AI tools, often without looping in IT or compliance.

 


How to Protect Your Organization from AI-Based Phishing Attacks

Don't panic—but do prepare. Here’s what cybersecurity pros now recommend in light of this attack:

 

1. Enable Multi-Factor Authentication (MFA)

Make MFA mandatory for all critical services, especially Microsoft 365. Even if credentials are stolen, MFA adds a powerful barrier.
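
For Microsoft 365 tenants, MFA is typically rolled out as a Conditional Access policy. Below is a minimal sketch in Python that creates such a policy through the Microsoft Graph API, starting in report-only mode. It assumes you already have an access token with the Policy.ReadWrite.ConditionalAccess permission; verify the exact policy schema and application identifiers against Microsoft’s current Graph documentation before relying on it.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # assumed: token with Policy.ReadWrite.ConditionalAccess

# Conditional Access policy requiring MFA for all users signing in to Office 365 apps.
# Created in report-only mode so you can review its impact before enforcing it.
policy = {
    "displayName": "Require MFA for Microsoft 365",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "applications": {"includeApplications": ["Office365"]},
        "users": {"includeUsers": ["All"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```

Once the report-only results look clean, switching the policy state to "enabled" makes MFA mandatory.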

 

2. Train Employees on Password-Protected Attachments

Many employees still assume a password-protected file is a secure file. In reality, attackers now use encryption as a tactic to slip malicious attachments past email scanners.
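
If your mail pipeline supports custom checks, even a small script can surface these attachments for extra scrutiny. The sketch below is only an illustration: it assumes the third-party pypdf library and a raw email message as input, and simply flags password-protected PDF attachments so they can be quarantined or detonated in a sandbox instead of being delivered untouched.

```python
import io
from email import policy
from email.parser import BytesParser

from pypdf import PdfReader  # third-party: pip install pypdf


def flag_encrypted_pdfs(raw_message: bytes) -> list[str]:
    """Return the filenames of password-protected PDF attachments.

    Encrypted attachments can't be opened by most scanners, so they deserve
    extra scrutiny: quarantine, sandbox detonation, or at least a user warning.
    """
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    flagged = []
    for part in msg.iter_attachments():
        if part.get_content_type() != "application/pdf":
            continue
        data = part.get_payload(decode=True)
        try:
            if PdfReader(io.BytesIO(data)).is_encrypted:
                flagged.append(part.get_filename() or "unnamed.pdf")
        except Exception:
            # A PDF the parser can't even open is just as suspicious.
            flagged.append(part.get_filename() or "unnamed.pdf")
    return flagged
```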

 

3. Monitor AI Platform Usage

Deploy tools that track and analyze which AI tools are being accessed, by whom, and how frequently. Shadow AI is a real threat.
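
If you can export proxy or firewall logs, even a simple script gives you a first look at who is using which AI platforms. The sketch below assumes a CSV export with "user" and "url" columns and a hand-maintained watchlist of AI domains; adapt both to your own proxy or CASB’s log schema.

```python
import csv
from collections import Counter
from urllib.parse import urlparse

# Hypothetical watchlist; extend it with the AI tools your teams actually use.
AI_DOMAINS = {"app.simplified.com", "chat.openai.com", "gemini.google.com"}


def summarize_ai_usage(proxy_log_csv: str) -> Counter:
    """Count requests to AI platforms, grouped by (user, domain)."""
    usage = Counter()
    with open(proxy_log_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            host = (urlparse(row["url"]).hostname or "").lower()
            if host in AI_DOMAINS:
                usage[(row["user"], host)] += 1
    return usage


if __name__ == "__main__":
    for (user, domain), hits in summarize_ai_usage("proxy_log.csv").most_common(20):
        print(f"{user:<30} {domain:<25} {hits}")
```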

 

4. Don’t Automatically Trust Known Domains

Just because a domain is familiar (like app.simplified.com) doesn’t mean the content being served is safe.
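
One way to act on this is to inspect what a link actually serves, even when its domain is allowlisted. The heuristic sketch below flags pages that present a password prompt or Microsoft-style sign-in text anywhere other than a genuine Microsoft login host; it is a crude stand-in for the page rendering and redirect-following a real detonation sandbox would perform.

```python
import requests
from urllib.parse import urlparse

# Hosts where a Microsoft sign-in form is expected; anywhere else it is a red flag.
LEGIT_LOGIN_HOSTS = {"login.microsoftonline.com", "login.microsoft.com"}


def looks_like_credential_harvest(url: str) -> bool:
    """Crude content check for emailed links, applied even to allowlisted domains."""
    host = (urlparse(url).hostname or "").lower()
    if host in LEGIT_LOGIN_HOSTS:
        return False
    try:
        html = requests.get(url, timeout=10, allow_redirects=True).text.lower()
    except requests.RequestException:
        return True  # a dead or blocked link in an email still warrants review
    return 'type="password"' in html or "sign in to your account" in html
```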

 

5. Invest in Behavioral Threat Detection

Legacy security tools often rely on known patterns. Newer systems use AI to detect suspicious behavior—even when it comes from “safe” sources.
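
The underlying idea is baselining: learn what “normal” looks like for each account, then alert on deviations. The toy sketch below tracks the countries and hours each user usually signs in from and raises an alert on anything new; real UEBA products score far more signals (device, ASN, impossible travel), so treat this only as an illustration of the concept.

```python
from collections import defaultdict
from datetime import datetime


class SignInBaseline:
    """Toy behavioral baseline: alert when an account signs in from a country
    or at an hour that has never been seen for it before."""

    def __init__(self):
        self.seen = defaultdict(lambda: {"countries": set(), "hours": set()})

    def check(self, user: str, country: str, when: datetime) -> list[str]:
        alerts = []
        profile = self.seen[user]
        if profile["countries"] and country not in profile["countries"]:
            alerts.append(f"{user}: first sign-in from {country}")
        if profile["hours"] and when.hour not in profile["hours"]:
            alerts.append(f"{user}: unusual sign-in hour {when.hour}:00")
        profile["countries"].add(country)
        profile["hours"].add(when.hour)
        return alerts
```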

 


What This Means for Medium and Large Businesses

This attack shows that no organization is immune, and no tool is too trusted to be exploited.

 

Let’s recap what businesses need to do:

 

  • Reassess trusted platforms: even AI tools used daily can be abused.

  • Update phishing training: modern phishing looks nothing like old scams.

  • Audit employee AI usage: employees often install tools without IT knowledge.

  • Treat AI traffic like external traffic: trust, but always verify.

 


Final Thoughts: AI Is Powerful—and Risky

Artificial intelligence platforms aren’t going away—in fact, their use will only increase across industries. But as this 2025 phishing attack proves, trusting a platform without scrutiny can backfire in a big way.

Cybersecurity in the age of AI must go beyond blacklists and antivirus software.
It requires continuous oversight, employee awareness, and smart threat detection systems.
Most of all, it requires businesses to treat AI traffic with the same suspicion as anything else online.

 


Take Action with ForceNow

At ForceNow, we specialize in securing your Microsoft 365 environment and protecting your business from advanced cyber threats like AI-enabled phishing attacks.

 

We also offer AI Knowledge Workshops to help your teams safely leverage AI tools without compromising security.

 

Don’t wait for an attack to wake up your organization—partner with ForceNow to stay ahead of the curve.

 

Contact us today to schedule a consultation!

 
