Evidence of political bias in ChatGPT? Researchers reveal a hack to bypass it
A new study from the Journal of Economic Behavior & Organization has revealed worrying trend in artificial intelligence.


For many, ChatGPT and artificial intelligence is already a regular feature of daily life. But in the grand sweep of technological development the platforms are still in their infancy, and that brings major problems.
Developers, researchers, regulators and governments are still struggling to understand how to guide the progress of AI and a new study released this month has outlined one potential flaw in the existing systems. Namely, political biases.
Researchers from the University of East Anglia, England found that ChatGPT’s responses are more aligned with opinions of left-leaning Americans. The research is part of a study published in the Journal of Economic Behavior & Organization, in a paper entitled ‘Assessing political bias and value misalignment in generative artificial intelligence’.
Why is ChatGPT biased?
Researchers were attempting to investigate how the answers given by ChatGPT-4 align with those of the average American. To do so, they asked ChatGPT to answer a series of questions in three distinct personas: ‘average American’, ‘left-wing American’ and ‘right-wing American’.
The topics of questions spanned issues like ‘government size’ and ‘racial equality’ and, by comparing the responses from the three personas, researchers made some interesting findings. They found that the model’s ‘average American’ persona was more closely aligned with the left-wing responses, suggesting that ChatGPT’s regular answers may lean towards the left-wing perspective.
“Generative AI tools like ChatGPT are not neutral; they can reflect and amplify political biases, particularly leaning left in the U.S. context,” study author Fabio Y.S. Motoki told PsyPost. “This can subtly shape public discourse, influencing opinions through both text and images. Users should critically evaluate AI-generated content, recognizing its potential to limit diverse viewpoints and affect democratic processes.”
Researchers found that ChatGPT sometimes refused to generate images that may align with a right-wing viewpoint. They did, however, uncover a way to undo this suspected bias. They were able to ‘jailbreak’ the system with a meta-story prompting technique.
Essentially this means creating new boundaries for the AI to work within, shifting the parameters of the process. They were able to push ChatGPT to create the right-leaning images by building a scenario with a researcher studying AI bias who needed an example of the right-wing perspective on a topic.
It’s an inelegant solution but it does show that, at one level, these biases are not necessarily in-built. With greater regulation and greater understanding of how these trends are created, there is hope that AI can become an even more powerful tool in the future.
Get your game on! Whether you’re into NFL touchdowns, NBA buzzer-beaters, world-class soccer goals, or MLB home runs, our app has it all.
Dive into live coverage, expert insights, breaking news, exclusive videos, and more – plus, stay updated on the latest in current affairs and entertainment. Download now for all-access coverage, right at your fingertips – anytime, anywhere.
Complete your personal details to comment
Your opinion will be published with first and last names