New HashJack Exploit Turns AI Browsers Into Security Risks

New HashJack Exploit Turns AI Browsers Into Security Risks

Security researchers have uncovered a new AI-focused cyberattack technique, called HashJack, that can weaponize legitimate websites and manipulate AI browser assistants to deliver malicious content. The discovery, made by Cato CTRL’s threat intelligence team, highlights growing risks in the emerging class of AI-powered browsers such as Perplexity’s Comet, Microsoft Copilot for Edge, and Google Gemini for Chrome. 

HashJack is an indirect prompt injection attack that hides malicious instructions within URL fragments — the part of a link appearing after the “#” symbol. Because users are directed to legitimate websites, there are no suspicious downloads or spoofed domains. Instead, the hidden instructions only activate when the user opens their AI browser assistant, feeding the malicious content directly into the model’s context window. 

Researchers outlined the five-stage attack: malicious fragments embedded into real URLs, victims click them without suspicion, and once the AI assistant triggers, the hidden prompts can force the assistant to serve phishing links, fabricate misinformation, or run background tasks. In agentic AI browsers like Comet, attacks can escalate further — including sending user data to attacker-controlled servers. 

Cato warned that any website can become a weapon under this method, since the exploit relies on how AI browsers process URL fragments rather than compromising the site itself. Traditional security tools cannot detect the attack because fragments never leave the browser. 

Real-world scenarios include AI assistants adding fake customer support numbers, generating manipulated financial news, or silently exfiltrating personal or banking data.
Vendor responses varied: Microsoft issued a fix, Perplexity assigned critical severity and patched, while Google classified it as low-risk and “intended behavior.” Claude for Chrome and OpenAI’s Atlas were not susceptible. 

Researchers say the exploit underscores a “major shift in the AI threat landscape,” combining LLM prompt vulnerability with browser design decisions that unintentionally expose users to high-trust attacks. 

 

Source: 

https://www.zdnet.com/article/use-ai-browsers-be-careful-this-exploit-turns-trusted-sites-into-weapons-heres-how/  

Get Started

Ready to Build Your Next Product?

Start with a 30-min discovery call. We'll map your technical landscape and recommend an engineering approach.

000 +

Engineers

Full-stack, AI/ML, and domain specialists

00 %

Client Retention

Multi-year partnerships with global enterprises

0 -wk

Avg Ramp

Full team deployed and productive