But the question arises: why does AI hallucinate? Not one particular LLM, but all of them, across every version?
If we are to police the output of these AI models for every piece of content they generate for publication, we end up wasting time instead of saving it, and saving time is the key value proposition AI promises.
Kishore, for me, it just goes back to these being prediction machines; they predict the next word in a sequence. There's much more work to be done, though Perplexity is getting very close.
AI doesn’t lie; it hallucinates confidently. The 3-2-1 Rule is a simple, practical framework leaders can use to catch errors before they damage trust. Must-read for anyone relying on AI in decision-making.
Hallucinates confidently, so true!
Incredible piece for any leader!
Thanks, @Hodman!
Such an important conversation. You've added a task to my list for today to build out a robust process for myself to use in AI interactions. It's easy to spot the "fun" hallucinations and they give us a laugh, but the hallucinations we don't spot easily are where the real issues lie.
So true, glad it resonated!!
A brilliant and much-needed post!
Thanks, Mia!
I was literally having this discussion this morning. This post is a total gem for anyone using AI in high-stakes comms, which, let's be honest, is most of us now. I do not feel I can rely on it, for the exact reasons you cite: reputation risk.
The “3-2-1 Rule” is clever because it’s simple without being simplistic. What stuck with me is the reminder that AI doesn’t have a reputation to lose - we do. And the bit about “confirmation hallucinations”? Brutal truth. Those are the ones that slip through because they flatter our biases.
This was so encouraging to read! You and I think alike: losing credibility because of AI hallucinations would be so easy if I'm not careful, and yes, it does know how to massage our self-esteem!
This is excellent, Joel. Have you also shared this with educators? I think this should be the framework of every AI literacy 101 course! Brilliant stuff. 🙏
Any ideas on getting it in front of educators? Haha, I know, I would give anything to have a high school or college AI literacy course become mainstream.
Prof at a UK university here. 😅 Seriously, we should chat about this. I'll drop you a DM. 🙏
Nice post, Joel. This really is a slippery slope. If we cannot trust what AI gives us, we cannot just question it, we have to verify. That means going to the source and seeing for ourselves. I know that defeats the point of using AI for speed, but right now, that is where we are. Maybe the answer is not to trust the stats or the facts, but to always ask for the source and verify it. That is still faster than hunting down the source completely on my own.
Thanks! It really is tough, but this is a good example of how faster is not always better.
Yes, there is so much information out there... we always need to fact-check.
Confidence without verification is how trust gets broken.
100%! And that goes beyond AI into life.
Sure does my friend.
I don't know the entire story clearly, but Air Canada probably didn't train its AI bot on its refund and other policies.
Along with facts, we need to train GenAI on grounding context (interpreting documents and responding to specific, real-world, or internal business data sources).
That's when we will see a fall in hallucinations. This dual reliability will make them trustworthy even in high-stakes decisions. And it will align them with user and business needs.
That just seems like such an enormous challenge, because context is ever-changing and hard to even gauge. I agree, though; that's the key.
I agree, and that's why orgs are struggling. Their policies, processes, and guidelines are always changing, so even they don't know who they are lol. Then how will AI?
haha exactly!!
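To make the grounding point a bit more concrete, here is a minimal, hypothetical sketch of what "grounding context" can look like in practice: retrieve the relevant internal policy text and force the model to answer only from it, rather than from its general training. The snippet names, the toy keyword retrieval, and the prompt wording are illustrative assumptions, not any vendor's actual API or Air Canada's real setup.

```python
# Hypothetical sketch: ground a GenAI answer in internal policy text instead of
# letting the model answer from memory. All names and policies are made up.

# Stand-in for an internal knowledge base, e.g. an airline's refund policies.
POLICY_SNIPPETS = {
    "bereavement refund": "Bereavement fares must be requested before travel; "
                          "refunds cannot be claimed retroactively after the trip.",
    "refund window": "Refund requests must be submitted within 90 days of purchase.",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval; a real system would use search or embeddings."""
    words = set(question.lower().split())
    return [text for key, text in POLICY_SNIPPETS.items()
            if words & set(key.split())]

def build_grounded_prompt(question: str) -> str:
    """Constrain the model to the retrieved context instead of its own guesses."""
    snippets = retrieve(question)
    context = "\n".join(f"- {s}" for s in snippets) or "- (no matching policy found)"
    return (
        "Answer using ONLY the policy excerpts below. If the answer is not in "
        "them, say you don't know and refer the customer to a human agent.\n\n"
        f"Policy excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to whichever LLM the organization uses.
    print(build_grounded_prompt("Can I get a bereavement refund after my trip?"))
```

The retrieval here is deliberately crude; the point is the contract, not the lookup: if the supplied context does not contain the answer, the model is told to say so instead of improvising, which is exactly the failure mode in the Air Canada story.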