25 Comments
Kishore-HUMANI:

But the question that arises is: why does AI hallucinate? Not one particular LLM, but all of them, across every version?

If we have to police the output of these AI models for every piece of content they generate for publication, we end up wasting time instead of saving it, and saving time is the key value proposition AI promises.

Joel Salinas:

Kishore, for me, it just goes back to these being prediction machines: they predict the next word in a sequence. There's much more work to be done, though Perplexity is getting very close.
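
To make the "prediction machine" point concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint (both my assumptions, not anything from the post): even for a prompt about a fictional place, the model produces a confident distribution over next words, because predicting is all it can do.

```python
# Minimal next-word prediction demo (assumes: pip install torch transformers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A prompt about a fictional country: the model still has to guess something.
prompt = "The capital of the fictional country of Zembla is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the very next token

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # Five confident-looking guesses; "I don't know" is not one of the options.
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.1%}")
```

The specific model doesn't matter; any next-token predictor will fill the blank with its best guess rather than flag that the premise is made up.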

Suhrab Khan:

AI doesn’t lie; it hallucinates confidently. The 3-2-1 Rule is a simple, practical framework leaders can use to catch errors before they damage trust. Must-read for anyone relying on AI in decision-making.

Joel Salinas:

Hallucinates confidently, so true!

Hodman Murad:

Incredible piece for any leader!

Joel Salinas:

Thanks, @Hodman!

Dallas Payne:

Such an important conversation. You've added a task to my list for today: build out a robust process for my own AI interactions. It's easy to spot the "fun" hallucinations, and they give us a laugh, but the hallucinations we don't spot easily are where the real issues lie.

Joel Salinas:

So true, glad it resonated!!

Mia Kiraki 🎭:

A brilliant and much-needed post!

Joel Salinas:

Thanks, Mia!

Melanie Goodman:

I was literally having this discussion this morning. This post is a total gem for anyone using AI in high-stakes comms, which, let's be honest, is most of us now. I do not feel I can rely on it for the exact reasons you cite: reputation risk.

The “3-2-1 Rule” is clever because it’s simple without being simplistic. What stuck with me is the reminder that AI doesn’t have a reputation to lose - we do. And the bit about “confirmation hallucinations”? Brutal truth. Those are the ones that slip through because they flatter our biases.

Joel Salinas:

This was so encouraging to read! You and I think alike: losing credibility because of AI hallucinations would be so easy if I'm not careful, and yes, it does know how to massage our self-esteem!

Sam Illingworth:

This is excellent, Joel. Have you also shared this with educators? I think this should be the framework of every AI literacy 101 course! Brilliant stuff. 🙏

Joel Salinas:

Any ideas on getting it in front of educators? Haha, I know, I would give anything to have a high school or college AI literacy course become mainstream.

Sam Illingworth:

Prof at a UK university here. 😅 Seriously, we should chat about this. I'll drop you a DM. 🙏

Andrew Barban:

Nice post, Joel. This really is a slippery slope. If we cannot trust what AI gives us, we cannot just question it; we have to verify. That means going to the source and seeing for ourselves. I know that defeats the point of using AI for speed, but right now, that is where we are. Maybe the answer is not to trust the stats or the facts, but to always ask for the source and verify it. That is still faster than hunting down the source completely on my own.
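
A rough sketch of that "ask for the source, then verify" habit, in plain Python with the requests library. Everything here is illustrative: the answer text and URLs are made up for the example, and a link that resolves is only the first gate before you actually read the source.

```python
# Crude first-pass check: pull URLs out of a model's answer and see if they even resolve.
# (Assumes: pip install requests. The answer string below is a made-up example.)
import re
import requests

answer = """Carriers must publish their refund policies (see https://www.iata.org/),
and https://example.com/2023-refund-report has last year's figures."""

urls = re.findall(r"https?://[^\s)\]]+", answer)
for url in urls:
    try:
        status = requests.head(url, allow_redirects=True, timeout=5).status_code
        print(f"{url} -> HTTP {status}")  # a live link is necessary, not sufficient: still read it
    except requests.RequestException as err:
        print(f"{url} -> unreachable ({type(err).__name__})")  # likely fabricated or stale
```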

Joel Salinas:

Thanks! It really is tough, but this is a good example of how faster is not always better.

Dennis Berry:

Yes, there is so much information out there... we always need to fact-check.

John Brewton:

Confidence without verification is how trust gets broken.

Joel Salinas:

100%! And that goes beyond AI into life.

John Brewton:

Sure does, my friend.

Vishal Kataria:

I don't know the entire story clearly, but Air Canada probably didn't train its AI bot on its refund and other policies.

Along with facts, we need to train GenAI on grounding context (interpreting documents and responding to specific, real-world, or internal business data sources).

That's when we will see a fall in hallucinations. This dual reliability will make them trustworthy even in high-stakes decisions. And it will align them with user and business needs.
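
For what that grounding could look like in practice, here is a minimal sketch under stated assumptions: the policy snippets and the keyword-overlap retrieval are invented for illustration, and call_llm is a hypothetical placeholder for whatever model API an organization actually uses.

```python
# Grounding sketch: retrieve the relevant internal policy text and pin the answer to it,
# instead of letting the model answer from its general training data.
POLICY_DOCS = {
    "refunds": "Bereavement fare requests must be submitted before travel, not retroactively.",
    "baggage": "Checked baggage allowance is determined by fare class and route.",
}

def retrieve(question: str) -> str:
    """Naive retrieval: pick the snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(POLICY_DOCS.values(), key=lambda doc: len(q_words & set(doc.lower().split())))

def grounded_prompt(question: str) -> str:
    """Build a prompt that restricts the model to the retrieved policy text."""
    context = retrieve(question)
    return (
        "Answer using ONLY the policy text below. "
        "If the policy does not cover the question, say you don't know.\n\n"
        f"POLICY:\n{context}\n\nQUESTION: {question}"
    )

print(grounded_prompt("Can I request a bereavement fare refund after my trip?"))
# answer = call_llm(grounded_prompt(...))  # hypothetical: send to whichever model you use
```

Real systems would use proper retrieval and fresher documents, but the principle is the same: the answer is tied to a data source someone actually maintains.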

Joel Salinas:

That just seems like such an enormous challenge, because context is ever-changing and hard to even gauge. I agree, though, that's the key.

Vishal Kataria:

I agree, and that's why orgs are struggling. Their policies, processes, and guidelines are always changing, so even they don't know who they are lol. Then how will AI?

Joel Salinas:

haha exactly!!
