Most people think scams start with shady emails or random pop-ups. But an even bigger danger is playing out in plain sight: scam advertisements embedded in the advertising systems of trusted platforms. These ads are not confined to obscure corners of the web. They appear in the places we rely on every day.
A recent deep-dive conversation on the Scam Rangers podcast with Sarah Ralston of The Media Trust highlighted how this problem works in practice. When we connect that conversation to three major Reuters investigations, the stakes become even clearer: major tech platforms may be profiting from the very scams that harm users, sometimes at massive scale, while consistently underestimating the human impact of online harm.
To understand what is happening and why it matters, it helps to see this problem from three intersecting angles.
1. Scam Ads Are Everywhere, Even on Trusted Platforms
Investigations by Reuters have shown that scam advertising is not a fringe issue. It is a multi-billion-dollar business woven into mainstream ad systems.
In one investigation, internal Meta documents revealed that as much as 10% of its total ad revenue, roughly $16 billion, could be tied to scam ads or ads for banned goods such as fake products and fraudulent services. Meta's own systems reportedly displayed higher-risk scam ads billions of times a day across Facebook, Instagram, and WhatsApp.
That figure matters not because everyone clicks on those ads, but because the ads are visible at scale on platforms billions of people use daily. The systems that deliver them are optimized for performance, not safety, and that gap is exactly what scammers exploit.
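To see why performance-first delivery leaves room for abuse, consider a deliberately simplified ranking function in the style of real ad auctions. The field names, weights, and example numbers below are illustrative assumptions, not any platform's actual formula:

```python
# Illustrative sketch: how a performance-first ad auction can rank ads.
# All field names and numbers are hypothetical, not any real platform's system.

def rank_ads(candidates):
    """Order candidate ads by expected revenue: bid * predicted click-through rate.

    Note what is absent: nothing here penalizes an ad for being risky.
    A scam ad with compelling creative (high predicted CTR) and an
    aggressive bid outranks a legitimate ad on these two signals alone.
    """
    return sorted(
        candidates,
        key=lambda ad: ad["bid"] * ad["predicted_ctr"],
        reverse=True,
    )

ads = [
    {"name": "legit_retailer", "bid": 0.80, "predicted_ctr": 0.010},
    {"name": "fake_giveaway",  "bid": 1.50, "predicted_ctr": 0.025},  # scam wins
]
print([ad["name"] for ad in rank_ads(ads)])  # ['fake_giveaway', 'legit_retailer']
```

Unless a safety signal is added to that objective, the auction itself rewards whichever advertiser converts best, legitimate or not.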
2. Financial Incentives and Enforcement Gaps Make the Problem Worse
Another Reuters investigation explored how Meta attempted to reduce scam advertising and then scaled those efforts back.
According to Reuters' reporting on internal documents, Meta built anti-fraud teams focused on reducing scam advertising, particularly scams involving bad actors based in China. After initial progress, internal strategy shifts reduced enforcement, even as revenue from fraudulent ads surged back toward earlier highs.
This is not simply a technical failure. It is a business decision. Platforms weigh the revenue earned from ads against the cost of stricter enforcement. When enforcement threatens meaningful revenue, harmful ads stay live longer and reach more users.
In practice, this allows scams to operate longer, spread wider, and cause greater financial harm.
3. The Human Harm Is Documented and Ongoing
A third Reuters report reveals how evidence of harm is sometimes handled internally.
Court filings cited by Reuters allege that Meta buried or downplayed internal evidence of social media harm, including mental health harm to teens.
This matters because it reflects a broader pattern. Platforms may recognize harm internally, whether mental health damage or financial fraud, while external transparency, regulation, and enforcement lag far behind. The same dynamic plays out with scam advertising. Internal awareness does not always lead to decisive external action.
Scam Ads Are a Psychological and Structural Exploit
Scam ads succeed because they target human psychology, not just technical weaknesses.
They rely on fear, urgency, and emotional manipulation. Fake virus alerts, limited-time offers, or official-looking branding override rational judgment. When these tactics appear inside trusted platforms, users are far more likely to engage.
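As a rough illustration of how those cues can be made machine-readable, here is a toy urgency-scoring heuristic. The keyword list and threshold are assumptions for demonstration only; production review systems use trained classifiers, not keyword counts:

```python
# Toy heuristic: flag ad copy that leans on fear and urgency.
# The cue list and the threshold are illustrative assumptions only.

URGENCY_CUES = [
    "act now", "limited time", "virus detected", "account suspended",
    "expires today", "urgent", "final warning", "your device is infected",
]

def urgency_score(ad_text: str) -> int:
    """Count how many urgency/fear cues appear in the ad copy."""
    text = ad_text.lower()
    return sum(cue in text for cue in URGENCY_CUES)

copy = "VIRUS DETECTED! Act now - limited time fix before your files are lost."
print(urgency_score(copy))  # 3 -> well above a hypothetical review threshold of 2
```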
This is why scam ads are so dangerous. They are not random or obvious. They are engineered to blend seamlessly into environments people trust.
Why Detection Alone Keeps Failing
Traditional cybersecurity tools focus on networks, systems, and known malicious domains. Scam ads break those assumptions because they are served from inside legitimate platforms, over domains users and security tools already trust.
Scammers also use cloaking techniques. They show benign content to reviewers and harmful content to real users. This allows malicious ads to pass internal reviews while attacking consumers in real time.
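To make the mechanism concrete for defenders, here is a stripped-down model of the cloaking pattern. The specific checks are simplified examples; real cloakers layer IP reputation, geolocation, headless-browser detection, and timing signals on top of simple rules like these:

```python
# Simplified model of server-side cloaking, sketched for defenders.
# The IP prefixes (documentation ranges) and checks are illustrative only.

REVIEWER_IP_PREFIXES = ("192.0.2.", "198.51.100.")  # hypothetical scanner ranges
BOT_UA_MARKERS = ("headless", "bot", "crawler", "preview")

def looks_like_reviewer(ip: str, user_agent: str) -> bool:
    """Guess whether a request comes from an ad reviewer or automated scanner."""
    ua = user_agent.lower()
    return ip.startswith(REVIEWER_IP_PREFIXES) or any(m in ua for m in BOT_UA_MARKERS)

def choose_landing_page(ip: str, user_agent: str) -> str:
    # Reviewers get a harmless page, so the ad passes review;
    # everyone else is routed to the scam.
    if looks_like_reviewer(ip, user_agent):
        return "benign_storefront.html"
    return "fake_support_alert.html"
```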
As long as platforms evaluate ads in artificial conditions instead of real world user environments, these gaps will persist.
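Closing that gap means fetching ad landing pages the way a real user would. Below is a minimal sketch of the idea using Playwright, a real browser-automation library; the mobile profile values are illustrative, and production scanners go further, using real devices and residential network paths that cloakers cannot easily fingerprint:

```python
# Sketch: fetch an ad landing page under realistic user conditions
# instead of from an obvious data-center scanner. Uses Playwright;
# the URL and profile values are illustrative placeholders.

from playwright.sync_api import sync_playwright

def fetch_as_real_user(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)  # note: headless itself can be fingerprinted
        context = browser.new_context(
            user_agent=(
                "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) "
                "AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148"
            ),
            viewport={"width": 390, "height": 844},  # typical phone screen
            locale="en-US",
        )
        page = context.new_page()
        page.goto(url)         # cloakers often also check referrer, IP, and timing
        html = page.content()  # capture what a real visitor would actually see
        browser.close()
        return html
```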
What Needs to Change
Scam prevention cannot rely on educating users to be more careful. The solution must be systemic.
Detection must reflect real user conditions, not sanitized review environments. Platforms must be accountable when revenue is tied to fraud. Transparency and enforcement need to follow evidence of harm. Collaboration across regulators, platforms, and safety organizations must become the norm.
Organizations like the Global Anti-Scam Alliance are beginning to enable this kind of cross-industry collaboration, but it must scale much further.
The Bigger Picture
Scam ads are not isolated failures. They are symptoms of business models that prioritize monetization over user protection, even when harm is measurable and known.
When internal revenue figures tied to scams, reduced enforcement efforts, and suppressed evidence of harm are viewed together, a structural problem emerges. Fixing it requires accountability, transparency, and incentive structures that reward safety rather than scale at any cost.