Love the tips Joel!
First rule of using AI: Don’t trust its first response.
Second rule of using AI: Learn to push back.
Simple :)
That's right!
Confirm-AI-tion—nice!! One tactic that has worked for me: if I ask an LLM for feedback, I say it’s for someone else, not for me. Then it’s much less sycophantic. E.g., instead of “provide feedback on my draft,” “help me critique [made up name’s] draft.”
That's smart! Haha it tries to hard to make us like it as if we would fire it. ...or it knows humans are so hungry for praise
This is cool, I like the idea here.
Thanks, Sharyph!
That dog image has really stuck with me! I can picture it perfectly. It’s such a sharp way of showing how easy it is to slip into asking AI to “fetch” what we already believe, rather than testing our ideas. I’ve seen this a lot with professionals who end up building echo chambers instead of real strategies, and it’s something I help people avoid on LinkedIn.
According to Gartner, through 2022, 85% of AI projects were predicted to deliver erroneous outcomes due to bias in data, algorithms, or the teams managing them (https://www.gartner.com/en/newsroom).
When you caught yourself falling into that “confirmation loop,” what did you do differently to break out of it?
Thanks for sharing! The moment I knew something had to change was when I brought up a clearly stupid idea and the AI told me it was great haha
I started asking AI to find reasons why what I’m doing may not work, or might fail.
Thank you so much for this Joel. What I find most interesting is how you've identified that the real issue isn't AI bias, but bias amplification. It's like having a mirror that not only reflects back what you want to see, but also makes it look more polished and credible. The "False Expertise Syndrome" you describe is particularly insidious because it feels like rigorous research when you're doing it.
But if we're explicitly programming AI to challenge us, are we actually developing better critical thinking skills, or are we just outsourcing our devil's advocate function, too? Like, does teaching AI to disagree with us make us better thinkers, or does it just make us better at managing AI disagreement?
I'm genuinely curious because I think this might be the next level challenge... ensuring that these AI interactions are building our cognitive muscles rather than just creating more sophisticated echo chambers with built-in contrarian features. What's your take on that?
I ask my wife for a comment
That’s awesome 😂
Great article! Will try out the prompt immediately
Thanks, Daniel!
Love the play on words!
Whenever AI plans or tells me something, I ask "What is the Devil's Advocate perspective on this?" and it brings some sense back. Same thing I do in my Kernels as well.
One thing to notice is that this problem is more pronounced in ChatGPT than in Claude and Gemini. Maybe it's just me.
Jitin, that’s great! I’ve seen that too, which is why I migrated most of my workflow to Claude
I'm a little late to this party, but I liked this concept and do something similar, so thought I'd chime in. I created a custom GPT for ChatGPT that acts as a "critical thinker" that I can feed any document or text to and it will pick it apart, point-by-point. I might try the "skin-in-the-game" idea and see if that gives it more bite. I'm happy to share the prompt I use if anyone is interested.
That's a genius idea, Kurt, thanks for sharing! I may try it
I find a few things useful to mitigate such LLM tendencies:
- Provide the same initial prompt to the same LLM in multiple separate conversations (i.e., with no context overlap)
- Do this with slightly modified initial prompts carrying slightly different behavioral cues (like the devil's advocate you mention, but a spectrum of such)
- Do this with different LLMs / chatbots.
The responses can be revealing; a rough sketch of this loop is below.
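If anyone wants to script it, here's a minimal sketch of that cross-checking loop in Python, assuming the openai client; the model name, cue wordings, and placeholder question are all illustrative, not recommendations:

```python
# Minimal sketch of the cross-checking idea above, assuming the `openai`
# Python client (>= 1.0). Model name, cues, and question are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Here is my plan: [your draft]. What do you think?"

# A small spectrum of behavioral cues, from neutral to adversarial.
CUES = [
    "You are a helpful assistant.",
    "You are a skeptical reviewer. Point out weaknesses first.",
    "You are a devil's advocate. Argue against the plan as hard as you can.",
]

for cue in CUES:
    # Each call is a fresh conversation, so there is no context overlap.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": cue},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"--- {cue}\n{response.choices[0].message.content}\n")
```

Comparing the answers side by side shows which points survive every cue and which only show up under the friendly one.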
That’s a very interesting approach! Thanks for sharing, I’ll try it
This one "Be the skeptical reader who disagrees with me. What would they say?"
I learned a while back to challenge all the hyped-up feedback from AI.
Thank you... I just met you this week on the Unplugged by Yana G.Y. podcast. Thanks for the tips you shared there also.
Very cool! Glad to kind of officially meet you! :) Here to help if you need anything
Yes, you helped that day as well. I asked the question whether you made your artifacts public on Claude. Thanks for sharing your resource (Futureproof Yourself) during the live.
For sure! I have some free public ones on some posts and then my main 6 tools are on my premium hub page for paid members
This is an important one, and I love the dialogue you've started here. Keep going.
Thanks! Appreciate the encouragement :)
Thank you for the mention Joel! I recently attended a course on Critical thinking with AI and I'm writing an article on that. Will be mentioning you there as well for sure :)
Joel, this is such a sharp and necessary callout. You’ve reframed AI not as a productivity shortcut, but as a mirror, one that either reflects our blind spots or sharpens our thinking, depending on how we engage. I especially loved the community prompts; they don’t just invite better answers, they encourage deeper leadership. Emotional intelligence doesn’t get sidelined in AI conversations; it becomes the filter through which strategy and self-awareness meet. Teaching AI to challenge us might be the most underutilized skill in this new era, and you’ve just made the case for why it matters so clearly.
Thank you, Colette! So glad it resonated, thank you for sharing
I have suffered from the negative bias. I made the AI my devil's advocate, and it played the role so strongly it was busting my ass. At some point, it really hurt. When I tried pushing back, it just threw the other side of the coin at me. That's when I realized: damn, I can't rely on most of what it says, bluntly haha. Thanks for showing us this other side of AI, Joel.
I love the graphic too :)
This made me laugh out loud haha! Never thought it would go too far on the other side and make us feel dumb