AI

How AI turned a police officer into a frog: Report writing gone wrong in Utah

We’re only in January and AI is already beginning to get out of hand.

Joe Brennan
Born in Leeds, Joe finished his Spanish degree in 2018 before becoming an English teacher to football (soccer) players and managers, as well as collaborating with various football media outlets in English and Spanish. He joined AS in 2022 and covers both the men’s and women’s game across Europe and beyond.
Welcome to 2026. Come in, but maybe it’s best if you don’t sit down. Everything’s on fire, Trump has taken Venezuela along with all its oil and will “run it with a group,” while AI has turned a police officer into a frog.

In a bizarre twist that could only have come from the imperfect 2026 marriage of (rough) cutting-edge technology, a Disney film, and real-world policing, a routine police report drafted by artificial intelligence in Utah recently included an outlandish claim: that a law enforcement officer transformed into a frog.

What might sound like a line from a children’s story turned out to be an embarrassing but telling glimpse at the limitations of AI. The robots aren’t taking over just yet. Ribbit ribbit.

The incident happened in Heber City, a small community in Utah, where the police department has been experimenting with AI-powered tools to ease the burden of administrative work.

One of those tools, designed to listen to body-camera audio and generate draft reports automatically, drew its inspiration from an unexpected source: the soundtrack of a movie playing in the background during a training exercise. And it’s funnier than you can imagine.

“The body cam software and the AI report writing software picked up on the movie that was playing in the background, which happened to be ‘The Princess and the Frog,’” police sergeant Rick Keel told local news. “That’s when we learned the importance of correcting these AI-generated reports.”

While the department quickly corrected the error, the episode has triggered broader questions about the reliability of generative AI software in fields as serious as law enforcement. Police leaders had hoped that by automating the tedious task of writing detailed reports, officers could reclaim valuable hours in their workweek. Early feedback suggests that these systems do save time, with some officers estimating savings of several hours per week.

But efficiency gains are only useful if the output is trustworthy. Voices within legal and technological fields warn that generative AI tools are prone to instances in which the software invents details that were never present in the original evidence.

Aurelius Francisco, cofounder of the Foundation for Liberating Minds in Oklahoma City, told the AP that “the fact that the technology is being used by the same company that provides Tasers to the department is alarming enough.” I must say that I agree.
