How to catch ChatGPT lying 🙊
5 tips for catching ChatGPT or Bard hallucinations.
Hey GPT hackers, first and foremost - Happy 4th of July 🇺🇸 to all our US readers. We’ll be celebrating with our neighbors and friends, but I didn't want to let the day pass without sharing our Tuesday article with all of you, no matter where you're tuning in from.
Today I’m covering the topic of AI hallucinations - when ChatGPT and Bard “lie” to you. Learn how to spot them and how to prevent them altogether.
Here we go.
A note about our sponsor ❤️
Level up your note-taking game with Notion AI!
Notion AI gives you a world of efficiency - super organization, easy publishing, and quick access to your ideas. Plus, it seamlessly integrates AI within the app, so there is no more copy-pasting back and forth to ChatGPT or Bard.
Notion AI runs on ChatGPT's model in the backend, so it will feel instantly familiar if you already use ChatGPT. Give it a try!
🤥 AI hallucinations are becoming a bigger problem
ChatGPT and Bard are both still considered experiments, but their usage is already widespread! Companies like OpenAI, Microsoft, and Google, along with others, won’t let the early stage of these tools slow their remarkable growth, and businesses are integrating AI in all sorts of ways, expanding its reach even further.
Understanding the limitations and risks is crucial. A good starting point? Make sure that AI models do not deceive you with made-up information.
Remember, at this point in time, generative AI is designed to produce human-like, believable output - it prioritizes fluency over absolute accuracy. Several recent news articles have covered this issue.
🙊 Why do AI models hallucinate (lie)?
Contrary to popular belief, AI models like ChatGPT don’t intentionally lie - at least not in the conventional human sense of telling a deliberate falsehood. It’s more of a “generative error.”
Sometimes AI models generate outputs that seem unusual, unexpected, misleading, or just plain inaccurate. This phenomenon is often referred to as a "hallucination" in the context of AI models. This happens as a result of the model's training process and the underlying complexities of the data it was trained on. (We tend to see more hallucinations when the AI model is being creative, which is influenced by its temperature setting.)
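The temperature setting mentioned above can be controlled directly if you call the model through the API instead of the chat interface. Here's a minimal Python sketch (assuming the `openai` package's mid-2023 `ChatCompletion`-style interface; the helper only builds the request dictionary, so no API key or network call is needed, and the model name and default value are illustrative):

```python
# Sketch: lower temperature -> less "creative" output, which tends to mean
# fewer hallucination-prone answers. The helper only assembles the request
# so the trade-off is visible without actually calling the API.

def build_request(question: str, temperature: float = 0.2) -> dict:
    """Build a chat-completion request; temperature near 0 favors factual answers."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": question}],
        "temperature": temperature,  # 0.0-2.0; higher = more creative, riskier
    }

# For factual lookups, keep temperature low:
factual = build_request("Who wrote 'On the Origin of Species'?", temperature=0.0)

# For brainstorming, a higher temperature is fine:
creative = build_request("Suggest five startup names", temperature=1.0)
```

You would pass the resulting dictionary to the API client (e.g. `openai.ChatCompletion.create(**factual)` in the pre-v1 package); the point is simply that factual questions deserve a lower temperature than creative ones.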
ChatGPT's limitations, like its inability to access real-time data (a problem being worked on with the Bing plugin) and its lack of access to specific databases or documents (an issue being fixed), make its answers less reliable, especially when you ask for current or specific information. In other words, the more complex the question, especially when it pushes the AI’s limitations, the higher the likelihood of hallucinations.
Although it may seem as if ChatGPT is deliberately misleading you, it is important to remember that these “hallucinations” are not a sign of consciousness or intent. They stem from how the model processes and represents data, and they often reflect the limits of its training.
All that to say - you’ve got to be able to spot the hallucination so you don’t use bad information.
PS: I can write an entire article dedicated to this section. I’ll stop here for now, but if you want an even deeper explanation of this, hit reply and let me know.
🕵️‍♂️ 5 tips for catching ChatGPT or Bard hallucinations
Now that we’ve covered why AI hallucinations are a problem and why they happen, let’s go through our 5 tips for catching ChatGPT or Bard hallucinations:
1. Ask for a source
The simplest way to catch an AI hallucination is to start by asking for sources, authors, and names for any facts presented in the answer.
Prompt example: “Give me the source of the [insert fact here] you presented in the last answer.”
2. Ask for clarifications
Tom Mitchell, a leading AI and machine learning professor, suggests asking the AI for clarifications on its responses as a way to resolve uncertainties. By forcing the AI to provide more detail, you’ll either get those details (because the facts are real) or be told that it can’t find anything more about the topic - a strong sign the original claim was made up.
Prompt: "Can you give me another example about [insert fact here]?"
3. Ask the same question in different ways
Another great way to confirm that the information you are getting is accurate is to ask the same question in a different way and see if the answer stays the same.
To demonstrate this, take these two very simple questions:
What is the relationship between social media marketing and sales increase?
How does a robust social media presence influence a company's revenue?
Compare the answers you get for both questions and assess the reliability of the information. If the answers are too different, you may be dealing with hallucinations. If the responses are consistent and based on known business principles or data, then it's likely the AI is giving you accurate information.
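If you talk to the model through an API, you can even automate a rough version of this consistency check. The sketch below compares two answers using a crude word-overlap score - the hard-coded answers, the 0.5 threshold, and the scoring method are all illustrative assumptions, not a rigorous similarity measure:

```python
# Sketch: flag possible hallucinations by comparing the answers the model
# gives to two rephrasings of the same question. In practice the answers
# would come from the API; here they are hard-coded for illustration.

def overlap_score(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets (0 = disjoint, 1 = identical)."""
    wa = {w.strip(".,!?").lower() for w in a.split()}
    wb = {w.strip(".,!?").lower() for w in b.split()}
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

answer1 = "Strong social media marketing tends to increase brand awareness and sales."
answer2 = "A robust social media presence tends to increase brand awareness and revenue."

score = overlap_score(answer1, answer2)
verdict = "consistent" if score > 0.5 else "possible hallucination"
print(verdict)  # these two answers overlap enough to pass the crude check
```

Word overlap is a blunt instrument - two consistent answers can be worded very differently - so treat a low score as a cue to dig deeper, not as proof of a hallucination.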
4. Verify important information elsewhere
This one is harder to do, but it is important. Always verify important information provided by ChatGPT or Bard, especially if it involves real-time data, personal details, or critical business decisions.
There is no prompt for this one because the idea is to verify it elsewhere by doing a bit of research on Google or by reaching out to experts in the field you are working in.
5. Use clear and specific prompts
The precision and detail of your prompt greatly affect how accurate ChatGPT's reply will be - making your request clear helps the AI produce more valuable answers. For example, rather than asking for today's stock price (which the AI can't give), ask about elements that affect stock prices or economic concepts tied to stock market trends.
Prompt: "Describe the factors that can influence a company's stock price."
🔁 A quick note on the feedback loop
Both ChatGPT and Bard (like most AI tools) let you give positive (thumbs up) or negative (thumbs down) feedback on each response. While this won’t verify the answer you just received, it’s important to send this feedback to the AI’s developers, as the feedback loop plays a critical role in improving the AI’s reliability and accuracy over time.
To make the most of AI systems and avoid hallucinations, keep these tips handy. By actively participating in the feedback loop, you contribute to the system’s improvement and enjoy a better experience overall.
Until next time! ✌️
🛠️ More hacks to crush your goals
Photo AI - upload a few selfies and create various AI-generated photoshoots (sponsored).
Gist - all-in-one customer service, engagement, and marketing tool. This is the one I used for Money Minx (sponsored).
Insights Analyst by Octane AI - upload a copy of all of your product reviews, find patterns, and ask questions to reveal insights (sponsored).
Your tool here - sponsor GPT Hacks and get your product in front of 2,000+ entrepreneurs and leaders.
🔥 Feature your product on GPT Hacks
GPT Hacks is a fast-growing newsletter with over 2,000 startup founders, business owners, and tech-savvy pros looking for new ways to leverage AI to crush their goals. Click the button below to learn more.
⏮️ Catch up with articles from our archives:
How was today’s tip?
Rate today’s tip to help us make GPT Hacks more useful for you 🚀