
AI hallucinations: How to reduce inaccurate content outputs

Alaura Weaver

Bet you thought hallucinations were reserved for humans. Well, not anymore. AI models can also be prone to their own kinds of hallucinations: generating nonsense or inventing things out of thin air.

While generative AI offers an immense opportunity for businesses seeking to increase productivity and content output, AI hallucinations can seriously damage a brand’s reputation and content quality.

Understanding what AI hallucinations are and what your company can do to avoid them will put you on the right track to creating content that supports your brand.

Summarized by Writer

  • AI hallucinations are when large language models generate an incorrect, nonsensical, or irrelevant answer.
  • Causes of AI hallucinations include limited or flawed training data and models that can’t verify whether what they generate is true.
  • Risks associated with AI hallucinations include loss of trust and authority, damaging bias, ethical issues, and legal implications.
  • Spotting AI hallucinations requires not trusting anything, introducing quality prompts, including relevant data, verifying claims, and double-checking.
  • Avoiding AI hallucinations requires choosing the right AI tools, providing support to the team, and investing in an enterprise solution.

What are “AI hallucinations”?

“AI hallucination” is the term coined for when a large language model generates an incorrect, nonsensical, or irrelevant answer. In other words, the model just makes up an answer.

AI hallucinations manifest as misinterpreted data, made-up answers, or false statements about a topic.

Probably the most famous AI hallucination to date was the hiccup at Google’s Bard launch, where Bard answered a question incorrectly in its own demo. That mistake cost Google authority in the AI space and an estimated $100 billion in market value (yikes).

What causes AI hallucinations anyway?

To understand why AI hallucinates, we need to take a few steps back to understand how generative AI comes up with answers in the first place.

Generative AI tools like ChatGPT, Bard, and Writer use natural language processing (NLP) and machine learning (ML) to create new content.

This means that generative AI models use ML to identify patterns in their training data, allowing them to predict the next sequence of words. And AI uses NLP and large language models to understand context and queries, drawing on massive web corpora like Common Crawl. But AI systems and models can’t necessarily parse that data for accuracy themselves, and that’s where hallucinations can spark.
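To make the pattern-prediction idea concrete, here’s a deliberately oversimplified sketch in Python: a toy word-frequency model (nothing like a production LLM) that predicts the next word purely from patterns in its training text. It will confidently repeat a falsehood it absorbed, because nothing in the mechanism checks what’s true.

from collections import Counter, defaultdict

# Toy illustration: "learn" which word tends to follow each pair of words.
training_text = (
    "the capital of france is paris . "
    "the capital of australia is sydney . "  # wrong on purpose; the model absorbs it anyway
    "the capital of japan is tokyo ."
)

tokens = training_text.split()
next_word = defaultdict(Counter)
for (w1, w2), w3 in zip(zip(tokens, tokens[1:]), tokens[2:]):
    next_word[(w1, w2)][w3] += 1

def predict(w1, w2):
    """Return the most frequent word seen after the pair (w1, w2)."""
    return next_word[(w1, w2)].most_common(1)[0][0]

print(predict("australia", "is"))  # -> "sydney": fluent, confident, and wrong

Real models are vastly more sophisticated, but the core failure mode is the same: they optimize for plausible continuations, not verified facts.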

“That world [of language] gives [AI systems] some clues about what is true and what is not true, but the language they learn from is not grounded in reality. They do not necessarily know if what they are generating is true or false,” said Melanie Mitchell, an AI researcher at the Santa Fe Institute, in The New York Times.

Data scientists aren’t completely sure what causes large language models to hallucinate in these cases. Some believe that limited training data may be a factor, causing these systems to generate a realistic — even if incorrect — answer.

“It’s saying something very different from reality, but it doesn’t know it’s different from reality as opposed to a lie,” said Pierre Haren, CEO and co-founder of Causality Link, a software development company, to Work Life News.

Risks associated with AI hallucinations

AI hype is everywhere, but it’s important to understand the risks associated with AI hallucinations. You need to consider how your team can put appropriate measures and protocols in place to prevent any embarrassing (or costly) hallucination mistakes.

  1. Loss of trust and authority: Any company that spreads false information risks losing the brand authority it’s built over the years, as customers won’t trust its resources. Gartner predicts that in just a few short years, 80% of marketers will face authenticity issues with their content.
  2. Damaging bias: AI hallucinations that spread bias have devastating consequences for minorities and underrepresented groups affected by biased AI algorithms. AI models have been shown to skew results and outputs in the past, for instance by approving lower credit card limits for women than for their husbands.
  3. Negative societal impact: Hallucinations can lead you to publish content that contributes to misinformation and pollutes public discourse with inaccurate, polarizing information. Back in 2020, a group of researchers found that GPT-3 could be prompted to produce speech and opinions similar to mass shooters, QAnon threads, and more.
  4. Poor content: If you let hallucinations go unchecked, you risk adding more nonsensical, garbage content to the mix, which will eventually serve as training data for future AI language models. Training AI systems on flawed or low-quality data is known as “garbage in, garbage out,” and it has significant real-world consequences. Word embeddings, for instance, have been found to characterize European American names as pleasant and African American names as unpleasant.
  5. Personal harm to consumers: Hallucinations can have serious ethical implications for consumers. In fields like healthcare, financial services, or law, people rely on trustworthy information from professionals. Publishing false or misleading info instead can damage their wellbeing and livelihood. A lawyer in New York is facing sanctions after citing fake cases generated by ChatGPT in a federal legal brief.
  6. Legal implications: Companies and organizations can be held liable for any false or inaccurate information they spread. One radio host in Georgia is suing OpenAI for defamation after ChatGPT generated false information about him: in response to a journalist’s query, the platform claimed he’d been accused of defrauding and embezzling funds from a nonprofit.

AI hallucinations are a costly risk, and it’s only natural that companies want to preserve their brand reputation and authority by keeping their content as authoritative and factual as possible.

That’s why it’s important to understand how to spot AI hallucinations and what procedures to implement to minimize risk.

How to tackle AI hallucinations

AI hallucinations are often subtle, which can make them difficult to spot. Your first rule of thumb is: don’t trust anything.

Although that might come across as harsh, the reality is that everything AI generates needs to be taken with a pinch of salt and fact-checked.

As you board the AI train, let’s equip you with the tools you need to safely maneuver through AI hallucinations and reach your destination: quality content land.

Become an AI prompt pro

The first step to avoiding AI hallucinations is introducing quality prompts.

When creating prompts, two things stick out: Be specific and purposeful.

Your prompt should be as specific as possible, which means avoiding open-ended questions. Specificity keeps the model from filling gaps with information it might otherwise invent on its own. Consider the purpose of the prompt so you can ask the AI exactly what you need, and provide it with reference materials right at the start. For instance, in Writer, you can provide a URL to any company webpage that you want to reference.

It’s good practice to include all of the details and parameters you want AI to follow to prevent nonsensical answers.

Instead of “write an email asking customers to sign up to Writer,” which is very vague, opt for “I’m an email marketer at Writer, a generative AI platform for enterprise companies. I need to get our customers signed up for a Team plan. Write an engaging and personal email to get customers to sign up. Offer them a 10% discount on their first month. Avoid generic and spam phrases. Use a friendly, fun, and educational voice.”

This informs AI platforms about the purpose, the context, and the writing style to follow.
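If your team works with generative AI through code rather than a chat interface, the same principles carry over. Here’s a minimal sketch of a structured prompt in Python; the generate() call at the end is a hypothetical stand-in for whichever SDK or API you actually use, not a real library function.

def build_prompt(role, task, constraints, voice):
    """Assemble a specific, purposeful prompt instead of an open-ended one."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"{role}\n"
        f"Task: {task}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Voice: {voice}"
    )

prompt = build_prompt(
    role="I'm an email marketer at Writer, a generative AI platform for enterprise companies.",
    task="Write an engaging, personal email that gets customers to sign up for a Team plan.",
    constraints=[
        "Offer a 10% discount on the first month",
        "Avoid generic and spammy phrases",
    ],
    voice="Friendly, fun, and educational",
)
# draft = generate(prompt)  # hypothetical call to your generative AI tool of choice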

Feeling overwhelmed? Don’t worry, we’ve got a handy guide to get you started: Prompt crafting: AI writing prompts for any marketing task

Include relevant data

When working with generative AI, always include relevant data to shape the output. The goal is to provide context so AI platforms don’t end up inventing their own filler and risking hallucinations.

Relevant data could be background, role title, statistics, research, purpose, or information on the audience. That way, the AI gets a narrower understanding of its task instead of generating a convincing (but made-up) output.
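As a rough illustration of what grounding a prompt looks like in practice, you can pass your own facts alongside the instruction and tell the model to stick to them. The details below are placeholders, and generate() is again a hypothetical stand-in for your tool’s API.

# Placeholder facts; swap in your real product details, stats, and audience data.
facts = [
    "Offer: 10% off the first month of a Team plan",
    "Audience: existing customers who haven't upgraded yet",
    "Voice: friendly, fun, and educational",
]

context = "\n".join(f"- {fact}" for fact in facts)
prompt = (
    "Using ONLY the facts below, write a short sign-up email for the Writer Team plan.\n"
    "If a detail isn't listed, leave it out rather than inventing it.\n\n"
    f"Facts:\n{context}"
)
# draft = generate(prompt)  # hypothetical call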

Verify claims, fact-check, and then double-check

Double-check everything. Generative AI is a great tool to streamline content creation, but you need someone responsible to ensure the content’s quality is up to par and free from hallucinations.

The moment you see a claim, research and fact-check it to make sure it’s correct. Sometimes the underlying data is correct, but it’s taken out of context and ends up being misleading.

Any fact that’s generated needs to be verified. Any conclusion that’s made needs to be double-checked to make sure it’s free from bias.

The biggest mistake is assuming that content created by AI is free from faults. Always verify the source, facts, and claims, even if the output looks trustworthy and convincing.
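If you want a programmatic safety net on top of human review, even a crude filter that flags sentences containing numbers, currencies, or quotes for manual verification can help. Here’s a deliberately simple sketch; it’s a heuristic for illustration only, not how any product’s claim detection actually works.

import re

# Flag sentences that look like factual claims (digits, %, $, quotes, attributions)
# so a human reviewer checks them before publishing.
CLAIM_PATTERN = re.compile(r'\d|%|\$|"|“|\baccording to\b', re.IGNORECASE)

def flag_claims(text):
    """Return the sentences that probably contain a verifiable claim."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if CLAIM_PATTERN.search(s)]

draft = (
    "Writer helps teams scale content. "
    "One report predicts 80% of marketers will face authenticity issues. "
    "Sign up today!"
)
for sentence in flag_claims(draft):
    print("VERIFY:", sentence)  # flags the sentence with the statistic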

Choose your tools wisely

Choose your generative AI tools with hallucinations and quality in mind. You want to make sure you provide your team with tools that help them spot hallucinations quickly.

For example, Writer has a claim detection tool that flags any claims, facts, or quotes for a human to verify. And you can use live URLs to train Writer to produce higher-quality content that’s free from hallucinations.

Claim detection feature in Writer
With the claim detection feature on, Writer will prompt users to verify claims that need double-checking.

Writer also built Knowledge Graph, which serves as a source of truth by pulling from all of your company’s data. It can connect to your company’s wiki, cloud storage, knowledge bases, and more to automatically verify that your content is accurate and aligned with your data. 

Knowledge graph in Writer
Writer will ensure your content is in line with the information stored in your company’s knowledge graph.

A word of caution: Your content is only as good as the team behind it. You need to support your team with the tools, resources, and knowledge to use AI effectively and create quality pieces.

Responsible development is your best tool to wield against AI hallucinations

Generative AI has the power to skyrocket a company’s content creation, streamline processes, and build brand awareness. But that content needs to be excellent and authentic, which is far easier with an AI partner that prioritizes responsible AI development and governance.

When it comes to using AI for content creation, the generative AI platform you choose really makes all the difference. It’s worth taking the time to invest in an enterprise solution that offers quality datasets, security, claim-checking, and features that let you personalize your content, such as audio and PDF inputs.

If you’re interested in an enterprise-grade solution that supports your content team needs with secure and customizable features, take a look at how Writer could be the right solution for you.