TECH NEWS

AI chatbots: Are there alternatives to ChatGPT?

ChatGPT has many in and outside of the tech world talking about the future of artificial intelligence and the problems that it may bring.

DADO RUVIC/REUTERS

In November 2022, ChatGPT, a chatbot that uses artificial intelligence developed by OpenAI, impressed many with the detail and speed of its responses to user questions.

Earlier this month, Microsoft announced it would integrate a more “powerful” version of ChatGPT, known internally as Sydney, into its search engine, Bing.

Shortly after, Google announced it would release its AI-assisted search feature, Bard. During the launch of Bard, the AI answered one of the questions incorrectly, raising doubts about Google’s ability to keep up with competitors. However, Bard’s failure also prompted greater scrutiny of the results provided by Sydney, which revealed that many of the answers it generated were full of errors and falsehoods. When podcaster Brace Beldon asked Bing’s Sydney to provide a biography of his co-host Liz Franczak, the AI generated a series of untruths. Sydney reported that Franczak had worked at the New Republic, an outlet she has no professional connection to, and misrepresented key details of TrueAnon, the show the pair hosts together.

Ned Edwards, a writer at the Verge, also shared his bizarre experience interacting with Sydney, in which the chatbot “confessed” to spying on Bing workers and falling in love with users.

The public failures of both Bing and Google cast light on the risks posed by the unregulated landscape that artificial intelligence and machine learning technologies are being developed within.

What is artificial hallucination?

Artificial hallucination is a concept that entered public discussion after users interacting with these chatbots noticed that some of the information generated was misleading, inaccurate, or false.

Experts use the term hallucination to emphasize that the AI can generate a response that it cannot recognize as untrue, similar to a human who is hearing or seeing something that is not really there.

Bing responds to concerns over misinformation

Bing has responded to these concerns under the Frequently Asked Questions section on the promotional website for “the new Bing.”

The company said that “Bing aims to base all its responses on reliable sources,” while acknowledging that content on the internet, which it uses to generate its answers, “may not always be accurate or reliable.”

Herein lies one of the main issues with ChatGPT and any AI that cannot reliably distinguish between real and fictitious content. The company places the onus on the user to “use [their] own judgment and double check the facts before making decisions or taking action based on Bing’s responses.” Whether a user should be able to trust the information being generated is a very different question from whether, in practice, they will be critical of AI-generated content.

The large-scale spreading of misinformation on social media in the United States has shown a general lack of media literacy within the population. Should search engines continue to push these sorts of products, they risk funneling false, inaccurate, or misleading information to a public that may not question the information they are reading.

What is the alternative?

At this point, there is no better alternative to Bing’s updated search feature, and given the issues these AIs present, it is unclear whether the continued development of these technologies is even desirable.

The launches of Sydney and Bard demonstrate the danger that prioritizing profit and growth over access to accurate reporting and information poses to the body politic. If the goal of platforms like Bing is for users to obtain their news through question-based chatbot interaction, then many logistical and ethical questions come into focus, such as:

  • Who is creating the content that will be fed into the AI? What biases do those sources hold?
  • Should the content used by the AI be reviewed?
  • How well can the chatbot distinguish between real and fake news?
  • Does the reliability of the AI decrease when it is asked to review material in languages other than English?

It is not hard to imagine a situation where the fossil fuel lobby pays to have thousands of articles written that raise doubts about climate science and anthropogenic climate change. Or perhaps a political party could fund the publishing of fake stories of voter fraud to sow doubt over the legitimacy of an election. If the AI cannot distinguish between real and fake news, the threats these programs pose to democracy and meaningful debate between citizens become very apparent. Without proper protections, these chatbots could further exacerbate existing problems with the spread of misinformation. Governments should provide oversight over the development of these technologies so that the public is protected from actors seeking to use them to manipulate users.
