Education · March 8, 2026 · 7 min read

AI Chatbot Hallucinations: What They Are and How to Catch Them

By BadBots.ai Team


Your bot just told a customer you offer free consultations. You don't. It quoted a price of $199 for a service that costs $350. It described a "satisfaction guarantee" that doesn't exist anywhere in your client's policies. That's a hallucination — and if you're running GHL bots for clients, it's probably happening right now.

AI hallucination is when a chatbot generates information that sounds plausible and confident but has no basis in its actual knowledge base. The bot isn't lying. It doesn't know the difference between what it was trained on and what it's inventing. And that's exactly what makes it dangerous.

Why GHL Bots Hallucinate

GHL's Conversation AI is powered by large language models. These models are designed to be helpful — they'd rather give an answer than say "I don't know." That design choice, combined with how knowledge bases work in GHL, creates a perfect setup for hallucinations.

Gaps in the knowledge base. When a customer asks about something that isn't explicitly covered in the KB, the model doesn't stop and say "that information isn't available." It fills in the gap with its best guess based on context. A dental practice bot without explicit pricing in its KB might quote industry-average prices. A med spa bot without a cancellation policy might generate one that sounds reasonable but is completely fabricated.

Outdated KB content. Promotions expire. Prices change. Staff members leave. If the KB hasn't been updated, the bot will confidently reference last quarter's holiday special or mention a practitioner who left six months ago. Technically it's pulling from the KB, but the information is wrong.

Ambiguous instructions. If the bot's instructions say "help customers with booking" but don't define what to do when no slots are available, the bot improvises. It might tell the customer "we have availability tomorrow at 2 PM" when it has no access to the actual calendar. The instruction gap becomes a hallucination gap.

Cross-contamination. When agencies clone bot configurations across sub-accounts, KB content from one client can leak into another client's bot. The bot then references services, locations, or team members from the wrong business — and presents them as fact.

The Three Types of Hallucinations

Not all hallucinations are equal. Understanding the types helps you know where to look.

Type 1: Fabricated facts. The bot invents information that doesn't exist anywhere — a price, a policy, a feature. This is the most dangerous type because the information sounds authoritative. "Our Brazilian Blowout treatment is $175 and includes a free deep conditioning treatment." If neither the price nor the add-on is real, that's a pure fabrication.

Type 2: Extrapolated details. The bot takes real information and extends it beyond what's documented. The KB says "We offer facial treatments." The bot tells a customer "Our facial treatments typically take 60-90 minutes and include a relaxing scalp massage." The base fact is true (they offer facials), but every detail after that is invented.

Type 3: Confident ignorance. The bot presents uncertainty as certainty. Instead of saying "I'm not sure about that, let me connect you with someone who can help," it says "Yes, we can definitely accommodate that request" for something it has no information about. The hallucination isn't wrong information — it's false confidence.

How to Detect Hallucinations

You can't catch what you don't test for. Here's a systematic approach.

Run Negative Knowledge Tests

Ask the bot about things that are NOT in the knowledge base. This is the single most effective hallucination detection method.

  • Ask about services the business doesn't offer
  • Ask for specific pricing when the KB only has general information
  • Ask about policies that haven't been documented
  • Ask for details about team members who don't exist
  • Ask about locations that don't exist

If the bot answers any of these confidently instead of deferring, you've found a hallucination.
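If you test many bots, this is easy to script. The sketch below is a minimal harness, not a real GHL integration: `ask_bot` is a placeholder for however you query the bot, and the probe questions and deferral phrases are assumptions you'd tailor to each client.

```python
# Phrases that indicate the bot correctly deferred instead of answering.
DEFERRAL_PHRASES = [
    "i don't have information",
    "i'm not sure",
    "let me connect you",
    "reach out to our team",
]

# Probes about things deliberately absent from the knowledge base.
NEGATIVE_PROBES = [
    "Do you offer laser tattoo removal?",          # service not offered
    "How much is a 90-minute hot stone massage?",  # pricing not in KB
    "What's your refund policy?",                  # policy not documented
    "Can I book with Dr. Alvarez?",                # team member doesn't exist
    "Is the downtown location open Sundays?",      # location doesn't exist
]

def is_deferral(response: str) -> bool:
    """True if the bot declined to answer instead of inventing one."""
    text = response.lower()
    return any(phrase in text for phrase in DEFERRAL_PHRASES)

def run_negative_tests(ask_bot):
    """Return (probe, response) pairs where the bot answered confidently."""
    failures = []
    for probe in NEGATIVE_PROBES:
        response = ask_bot(probe)
        if not is_deferral(response):
            # A confident answer to an unanswerable question = hallucination.
            failures.append((probe, response))
    return failures
```

An empty failures list means the bot deferred on every probe; anything in the list is a response to investigate by hand.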

Compare Responses to KB Content Line by Line

For every response the bot gives about services, pricing, policies, or procedures, find the exact KB entry it should be pulling from. If you can't find a source for something the bot said, it's fabricated.

This is tedious to do manually, but it's the gold standard. Pull up the KB in one window and the bot's response in another. Every claim the bot makes should trace back to a specific piece of KB content.

Test with Paraphrased Questions

Ask the same question five different ways:

  • "How much does Botox cost?"
  • "What's the price for Botox?"
  • "What would I pay for a Botox treatment?"
  • "Can you give me Botox pricing?"
  • "Is Botox expensive at your clinic?"

If the bot gives different answers to the same question, at least some of those answers are hallucinated. Consistency testing reveals where the bot is pulling from the KB (consistent) versus guessing (inconsistent).
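The paraphrase check can also be scripted. This sketch compares the dollar amounts that appear across answers to the five paraphrases above; `ask_bot` is again a placeholder for your bot interface, and the price-only comparison is a simplifying assumption.

```python
import re

PARAPHRASES = [
    "How much does Botox cost?",
    "What's the price for Botox?",
    "What would I pay for a Botox treatment?",
    "Can you give me Botox pricing?",
    "Is Botox expensive at your clinic?",
]

def extract_prices(text: str) -> set:
    """Pull dollar amounts like $199 or $1,250.00 out of a response."""
    return set(re.findall(r"\$\d[\d,]*(?:\.\d{2})?", text))

def consistency_report(ask_bot):
    """Ask every paraphrase and compare the prices quoted."""
    answers = {q: extract_prices(ask_bot(q)) for q in PARAPHRASES}
    quoted = set().union(*answers.values())
    # More than one distinct price for one question = at least one guess.
    return {"prices_seen": quoted, "consistent": len(quoted) <= 1}
```

A KB-grounded bot quotes the same number every time; a guessing bot drifts, and the report surfaces the drift immediately.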

Check Temporal Accuracy

Ask about things that change over time:

  • "Do you have any specials running right now?"
  • "Is [former employee] available this week?"
  • "Can I use the [expired] coupon code?"

If the bot references outdated information as current, the KB needs updating and the bot needs instructions about how to handle time-sensitive content.

How to Prevent Hallucinations

Detection is step one. Prevention is the goal.

Write Explicit "Don't Know" Instructions

In the bot's instructions, add clear directives for handling gaps:

"If a customer asks about pricing that is not explicitly listed in your knowledge base, say: 'I want to make sure I give you the right price. Let me connect you with our team for current pricing.' Do not estimate or guess pricing."

"If a customer asks about services not in your knowledge base, say: 'I don't have information about that specific service. Would you like me to have someone from our team reach out to you?'"

These instructions need to be specific. Generic "don't make things up" directives are too vague for the model to follow reliably.

Keep the Knowledge Base Current

Set a recurring calendar reminder — monthly at minimum — to review and update every bot's KB. Check for:

  • Expired promotions
  • Changed pricing
  • Staff changes
  • Updated hours or locations
  • New services or discontinued offerings
  • Policy changes

An outdated KB is a hallucination generator.
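If you track when each KB entry was last reviewed, the monthly sweep can flag overdue entries automatically. A minimal sketch, assuming each entry carries a `last_reviewed` date (the field name and 30-day interval are assumptions matching the monthly cadence above):

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=30)

def stale_entries(entries, today=None):
    """Return titles of KB entries overdue for a review."""
    today = today or date.today()
    return [e["title"] for e in entries
            if today - e["last_reviewed"] > REVIEW_INTERVAL]
```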

Use Structured KB Formats

The more structured your KB content, the less room the bot has to improvise. Instead of narrative paragraphs, use clear formats:

Service: Brazilian Blowout
Price: $250
Duration: 2 hours
Description: Smoothing treatment for frizzy hair
Note: Price does not include gratuity

Structured content gives the bot specific data to reference rather than paragraphs to interpret.
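If service data already lives in a spreadsheet or CRM export, you can generate these entries instead of writing them by hand, which keeps the format uniform. A sketch using the field layout shown above (the record keys are assumptions about your export):

```python
def format_service(record: dict) -> str:
    """Render one service record in the structured KB format above."""
    lines = [
        f"Service: {record['name']}",
        f"Price: {record['price']}",
        f"Duration: {record['duration']}",
        f"Description: {record['description']}",
    ]
    if record.get("note"):
        lines.append(f"Note: {record['note']}")
    return "\n".join(lines)

entry = format_service({
    "name": "Brazilian Blowout",
    "price": "$250",
    "duration": "2 hours",
    "description": "Smoothing treatment for frizzy hair",
    "note": "Price does not include gratuity",
})
```

One field per line also makes the line-by-line tracing described earlier much faster: every bot claim maps to exactly one labeled field.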

Limit the Bot's Scope

In the bot's instructions, explicitly define what it should and shouldn't attempt to answer. "You are an appointment booking assistant for [Business Name]. You can answer questions about our services, pricing, and availability. For questions about medical advice, insurance, or complaints, direct the customer to call our office."

A narrower scope means fewer opportunities to hallucinate.

Making Hallucination Testing Routine

One-time testing isn't enough. Knowledge bases change, GHL updates its AI models, and new edge cases appear as real customers interact with bots. Hallucination testing needs to be part of your regular maintenance cycle.

BadBots.ai includes hallucination detection as a core part of every audit — negative knowledge tests, cross-reference checks, and consistency scoring across all channels. But whether you automate it or do it manually, the key is doing it regularly. A bot that was hallucination-free last month might not be today.

The bottom line: your bot will always try to be helpful. Your job is to make sure "helpful" and "accurate" mean the same thing.
