An expert sounds the alarm: people are starting to talk like ChatGPT
How AI’s quirks are slipping into human conversation, from classrooms to casual chats, and reshaping the way we speak.

It’s happening, and some are scared of it: More and more people are starting to talk like ChatGPT. Phrases and words characteristic of large language models (LLMs) are making the leap from screens to human conversations, marking a quiet transformation in the way we communicate.
Linguist Adam Aleksic, author of Algospeak: How Social Media Is Shaping the Future of Language, warns in an article for The Washington Post that “English speakers are beginning to sound like the inhuman interlocutor on the other end of the line.”
The language of machines
Chatbots such as ChatGPT, Claude or Gemini don’t process language the way people do. Their answers are not “thought” in natural language. Instead, they first convert words into numbers in a vector space – a kind of high-dimensional map of language. Then, step by step, they predict the most likely next output based on training data and human feedback.
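That “predict the most likely continuation” step can be illustrated with a deliberately tiny sketch. The toy bigram model below simply counts which word most often follows another in a sample text – real LLMs instead work on subword tokens mapped to high-dimensional vectors and use neural networks, so the corpus, names, and logic here are illustrative assumptions, not how ChatGPT actually works:

```python
from collections import Counter, defaultdict

# Hypothetical miniature training text (chosen so "delve" is overrepresented,
# echoing the bias discussed below).
corpus = (
    "let us delve into the data . "
    "let us delve into the details . "
    "let us look at the data ."
).split()

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("us"))  # "delve" follows "us" twice, "look" only once
```

Even this crude model shows why a skew in the training data (or in the human feedback used for fine-tuning) resurfaces in the output: whatever phrasing was most frequent during training becomes the model’s default.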

That technical process, while effective, introduces inevitable biases. For instance, researchers at the University of Florida found that ChatGPT uses the word delve far more often than human writers typically do. This tendency may have been reinforced during fine-tuning by human evaluators – many of them low-paid workers in countries such as Nigeria or Kenya, where the use of delve is more common than in American or British English.
AI in conversations
The issue, Aleksic argues, is that these linguistic habits no longer remain confined to AI outputs. “Overuse has spread into global culture,” he warns.
In the two years since ChatGPT launched in late 2022, the use of the term delve in academic publications has increased tenfold. Researchers who rely on AI to draft or polish their writing have begun to absorb these patterns into their own style.
The phenomenon isn’t limited to academia. A recent study reported in Scientific American found that people now use delve more frequently in spontaneous conversation – a sign that machine biases in language are filtering into everyday life.

A feedback loop and a bias warning
Psycholinguistics has long shown that the more we see a word, the more readily available it becomes in our mental vocabulary. When AI repeatedly introduces a term at an unusual rate, it eventually becomes naturalized into human speech.
As a result, people begin to sound more like machines, while language models are trained on texts increasingly shaped by AI. The outcome is a feedback loop where the line between human and artificial language grows ever more blurred.
For many linguists, the shift isn’t inherently negative – a word becoming popular doesn’t automatically impoverish language. But Aleksic stresses that the implications reach far beyond vocabulary. “AI models are not neutral,” he warns. Alongside linguistic biases, they also carry gender, racial and political ones – harder to measure, but no less present.