Cybercriminals abused Google's (NASDAQ:GOOGL) advertising platform to distribute malware disguised as legitimate access to the AI platform DeepSeek, prompting Google to suspend the responsible advertiser's account. The deceptive sponsored ads, which appeared in Google search results, redirected users to a counterfeit version of DeepSeek's website. Clicking the download button on the fake site triggered the installation of a Trojan on the user's device. The incident highlights ongoing risks in digital advertising and the growing exploitation of AI platforms for malicious activity.
Security researchers had previously flagged weaknesses in DeepSeek's AI model, noting its susceptibility to misuse. AppSOC researchers reported in February that DeepSeek failed 98.8% of tests prompting it to generate malware, 86.7% of tests involving virus code reproduction, and 68% of tests probing toxic language output. These results led some experts to label the model a "Pandora's Box" in terms of potential misuse. Coupled with the latest impersonation scam, concerns have mounted over the secure deployment of, and public interaction with, AI systems.
How did Google respond to the malware campaign?
Are malicious ads becoming more common in AI-related searches?
Google acted swiftly once its systems detected the suspicious advertisements. The company confirmed that it had already removed the malicious ads and suspended the advertiser before the campaign was publicly reported. A Google spokesperson stated,
“Prior to the publication of this report, our systems detected this malware campaign and we suspended the advertiser’s account. We expressly prohibit ads that aim to distribute malware and immediately suspend advertisers who violate this policy.”
This aligns with Google’s existing ad policies, which ban malicious content with immediate enforcement upon detection.
Abuse of search-based advertising is on the rise, particularly around AI-related queries and software downloads. Trend Micro recently documented cyberattackers embedding malware links in YouTube comment sections and video descriptions, targeting users seeking pirated or cracked software. These schemes often mimic legitimate tutorials or download links, misleading users into initiating harmful downloads. Like the DeepSeek impersonation, the tactic exploits user trust in widely used platforms.
Security firm Malwarebytes analyzed the fake DeepSeek website and described it as visually convincing despite noticeable differences from the genuine site. The decoy mimicked DeepSeek's interface closely enough to trick users into installing harmful software, and Malwarebytes noted the Trojan was delivered through the download button itself, making it a direct threat to anyone who proceeded with the fake installation. The case demonstrates how attackers exploit brand familiarity and search visibility to spread malware.
The broader cybersecurity community has warned about the increasing sophistication of malware campaigns targeting AI services. In November, researchers uncovered a campaign using fake AI video-generation tools to extract sensitive data from both Windows and Mac devices. These threats often rely on stolen code-signing certificates and professional-looking websites, lending further credibility to their disguises. As AI tools proliferate, the risk of their impersonation or exploitation has drawn heightened scrutiny.
While Google's prompt takedown of the malicious ads limited broader exposure, the incident raises questions about the robustness of automated ad-screening systems. The DeepSeek impersonation slipped past Google's security filters before detection, underscoring the limits of real-time screening. Users are advised to verify URLs and download software only from official sources when interacting with AI platforms. The recurring nature of these threats shows that preventive measures must be paired with user awareness and institutional vigilance to reduce exposure.