A warning from cybersecurity experts: ChatGPT’s Atlas could be a hacker’s best weapon
Experts warn of serious security flaws that facilitate phishing attacks and data theft.

The Atlas browser, developed by OpenAI and presented as the next evolution of AI-powered browsing with ChatGPT integration, is facing an unexpected crisis. Despite its promises of innovation and productivity, a recent study has revealed that Atlas is 90% more vulnerable to phishing attacks than Chrome or Edge – setting off alarms across the cybersecurity community.
According to the report, Atlas failed to block 94.2% of real-world phishing attempts, far below the protection levels offered by traditional browsers. The main cause appears to be a lack of robust anti-phishing defenses and an architecture that keeps authentication tokens active, leaving users exposed during browsing sessions.
Hackers find an open AI door
Experts warn that the threat goes beyond phishing. Serious vulnerabilities have been detected related to a technique known as cross-site request forgery (CSRF), which allows attackers to trick users into visiting a seemingly harmless page that injects malicious instructions directly into Atlas’s integrated chatbot memory.
What’s most alarming is that these injected instructions persist between sessions, replicating themselves in future ChatGPT conversations – even when switching devices or browsers. In other words, once compromised, a user could remain infected without ever realizing it.

LayerX, the cybersecurity firm that demonstrated the issue, showed a relatively harmless example – a prompt that made the system play a song whenever it connected to the user’s home Wi-Fi. But experts caution that the same technique could steal personal data, install malware, or grant remote control of the device.
A blurred line between data and commands
George Chalhoub, a professor of human-computer interaction at University College London, told Fortune that these attacks reveal a deeper problem: “There will always be residual risks around prompt injections, as it’s inherent to systems that interpret natural language and execute actions.”
Chalhoub warned that the true danger lies in blurring the boundary between data and instructions. In his words, “It could turn an AI agent into an attack vector against the user,” with consequences as serious as unauthorized access to emails, personal messages, or passwords stored in the browser.
This type of vulnerability, known as prompt injection, tricks the AI into interpreting hidden malicious commands as legitimate ones. These instructions can be concealed anywhere on a webpage – in a paragraph, an image, or even a blank line.

The high price of innovation
OpenAI describes Atlas as “a virtual assistant inside your browser,” capable of executing actions, summarizing pages, and automating daily tasks. Yet for cybersecurity researchers, that same autonomy dangerously widens the attack surface.
“Atlas could be a hacker’s dream,” some analysts warn. Any error or malicious website could be enough for the browser itself to steal personal information or execute commands without the user’s consent.
As OpenAI works to strengthen Atlas’s defenses, one clear lesson emerges: the more autonomy and power artificial intelligence gains in connected environments, the greater the need to prioritize its security.
Because, as this episode shows, the line between smart assistance and digital vulnerability may be far thinner than it appears.

Related stories
Get your game on! Whether you’re into NFL touchdowns, NBA buzzer-beaters, world-class soccer goals, or MLB home runs, our app has it all.
Dive into live coverage, expert insights, breaking news, exclusive videos, and more – plus, stay updated on the latest in current affairs and entertainment. Download now for all-access coverage, right at your fingertips – anytime, anywhere.
Complete your personal details to comment