AI and Health

He asked ChatGPT how to improve his diet and ended up poisoned, paranoid, and hallucinating

A man suffered bromide poisoning after ChatGPT allegedly suggested it as a salt substitute—highlighting AI risks in health advice.

Maite Knorr-Evans
Maite joined AS USA in 2021, bringing her experience as a research analyst investigating illegal logging to the team. Maite’s interest in politics propelled her to pursue a degree in international relations and a master's in political philosophy. At AS USA, Maite combines her knowledge of political economy and personal finance to empower readers by providing answers to their most pressing questions.

Seeking health advice, a 60-year-old man turned to ChatGPT. His case, reported in Annals of Internal Medicine: Clinical Cases, serves as a cautionary tale about the potential health risks of relying on large language models.

When the patient arrived at the hospital, he was experiencing hallucinations and claimed his neighbor was poisoning him. He had no prior psychiatric or medical history and was not taking any medications or supplements. The medical team was understandably puzzled. After conducting additional tests and consulting Poison Control, they determined the likely cause of his symptoms was bromism—poisoning from bromide.

The patient told doctors he had been following a highly restrictive vegetarian diet and was trying to eliminate sodium. He turned to ChatGPT for alternatives and said the bot suggested bromide, the toxic compound responsible for his symptoms.

“For three months, he had replaced sodium chloride with sodium bromide purchased online after consulting ChatGPT, where he read that chloride could be swapped with bromide—likely in reference to cleaning purposes,” explained the study’s authors.

After two weeks of treatment, the patient was discharged and made a full recovery.

A warning about the dangers of AI and chatbots

The authors highlighted the case as a critical example of how AI can contribute to adverse or preventable health outcomes. They believe the patient was using ChatGPT-3.5 or 4 and expressed hope that improvements would be made to prevent similar incidents in the future.

“Unfortunately, we do not have access to his ChatGPT conversation log and will likely never know exactly what output he received, as responses are unique and build on previous inputs,” the authors noted after attempting to replicate the chatbot’s advice.
