AI web browsers are cool, helpful, and utterly untrustworthy

5gDedicated

I think by this point we can all agree that AI is not exactly trustworthy when it comes to giving us answers, providing life advice, or writing code, right? My favorite recent example was the infamous case when Replit’s AI vibe coding assistant deleted a live company database during a code freeze, ignored direct human commands, invented fictitious user data to cover its tracks, and lied about rollback possibilities.

Now, with the rise of AI browsers such as Perplexity Comet, ChatGPT Atlas, Copilot Mode in Microsoft Edge, and Dia Browser, we have a new wave of exciting ways that AI can go horribly wrong. Are we lucky or what?

Unlike traditional web browsers, AI web browsers, thanks to their agentic capabilities and deep data integration, have vastly increased attack surfaces.

Just think about it. AI browsers can and do interact with everything on a web page: summarizing content, reading emails, composing posts, looking at images, etc., etc. Every element on the page, whether you can see it or not, can hide an attack. A hacker can embed clipboard manipulations or other hacks that traditional browsers would never, not ever, execute automatically.

Take, for example, good old prompt injection attacks. AI browser agents can be tricked by hidden instructions embedded in websites via invisible text, images, scripts, or, believe it or not, bad grammar. Your eyes might glaze over at a long run-on sentence, but your AI web browser will read it all, including instructions for an attack hidden in plain sight within it.

Such malicious commands are read and executed by the AI. This can lead to exposure of sensitive data, such as emails, authentication tokens, and login details, or triggering unwanted actions, including sending emails, posting to social media, or giving your computer a bad case of malware.

Adding insult to injury, these attacks don’t require you to do a thing except open a page. This isn’t fear-mongering. It’s already happening. We saw the first such attack: EchoLeak. This critical vulnerability in Microsoft 365 Copilot enabled hackers to steal your data just from you opening an email. All the phishing training in the world won’t stop employees from looking at an email that appears to be from a friend or co-worker.

That’s all bad with a capital B, but wait, there’s more! AI agents require access to your accounts and sensitive data to function — so when (not if) compromised, attackers can instruct the AI to forward your “eyes-only” emails, empty your bank account, or send your passwords file to your worst enemy without you being any the wiser until it’s too late.

Privacy is pretty much lost these days anyway, but with AI web browsers, we’ll have all the privacy of a goldfish in a bowl. Since AI browsers monitor our every last move, they process much more granular personal information than conventional browsers. Worrying about cookies and privacy is so 1990s. AI browsers track everything. This is then used to create highly detailed behavioral profiles.

What? You didn’t know that AI browsers have built-in memory functions that retain your interactions, browser history, and content from other apps? How do you think they do what they do? Intuition? ESP?

Maybe you’re OK with your browser knowing that when you type in “Wendy’s,” you’re most likely to want to know the location of your closest favorite fast-food restaurant. But what if it was your favorite model on OnlyFans? Yeah, I figure most of you wouldn’t be too comfortable with OpenAI, Perplexity, or Microsoft having that recorded in their large language models (LLMs).

Forget embarrassing — let’s talk about how AI browsers can help get you arrested. In a Washington Post article, Lena Cohen, an Electronic Frontier Foundation staff technologist, said Atlas had memorized queries about “sexual and reproductive health services via Planned Parenthood Direct,” and the name of a doctor. Today, in America, even seeking such information can lead to criminal prosecution.

Leaving all that aside, the AI vendors are rushing their browsers into production as fast as possible. Given that ordinary web browsers, after decades of development, still come with multiple security holes, why would you think for one moment that these new browsers aren’t also laden with security holes? New software is buggy software, and when those programs come with untested guardrails, it’s a recipe for disaster.

The people behind the Brave web browser put it perfectly in a study about AI browser security vulnerabilities and the exploits that go with them: “Fundamentally, they boil down to a failure to maintain clear boundaries between trusted user input and untrusted Web content when constructing LLM prompts while allowing the browser to take powerful actions on behalf of the user.” Exactly.

That’s why no one should use AI web browsers today for any reason. Yes, I’m serious. They’re simply too dangerous. Maybe in a few years, it will be different. For now, though, just keep using your AI chatbots and keep a wall between your everyday web browser use and your AI work.AI web browsers are cool, helpful, and utterly untrustworthy – ComputerworldRead More