45 Comments
Wyndo:

Love the tips Joel!

First rule of using AI: Don’t trust their first responses.

Second rule of using AI: Learn to push back.

Simple :)

Joel Salinas:

That's right!

Natalia Cote-Munoz:

Confirm-AI-tion—nice!! One tactic that has worked for me: if I ask an LLM for feedback, I say it's for someone else, not for me. Then it's much less sycophantic. E.g., instead of "provide feedback on my draft," try "help me critique [made-up name]'s draft."
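
The reframe above can be sketched as a tiny prompt builder. Everything here is illustrative, not from the post: the function name, the default persona "Alex", and the exact wording are assumptions about how one might package the trick.

```python
def reframe_for_critique(draft: str, persona: str = "Alex") -> str:
    """Build a feedback prompt that attributes the draft to a third party.

    Asking the model to critique someone else's work (rather than "my draft")
    tends to reduce sycophantic praise. `persona` is a made-up name standing
    in for the real author.
    """
    return (
        f"Help me critique {persona}'s draft below. "
        "Be specific about weaknesses; do not soften the feedback.\n\n"
        f"---\n{draft}\n---"
    )

# Example: the returned string is what you'd paste or send to the LLM.
prompt = reframe_for_critique("AI assistants amplify our biases.")
```

The draft text itself is unchanged; only the framing around it shifts from first person to third person.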

Joel Salinas:

That's smart! Haha, it tries too hard to make us like it, as if we would fire it otherwise ...or it knows humans are so hungry for praise

Sharyph:

This is cool, I like the idea here.

Joel Salinas:

Thanks, Sharyph!

Melanie Goodman:

That dog image has really stuck with me! I can picture it perfectly. It's such a sharp way of showing how easy it is to slip into asking AI to "fetch" what we already believe, rather than testing our ideas. I've seen this a lot with professionals who end up building echo chambers instead of real strategies, something I help people avoid on LinkedIn.

According to Gartner, 85% of AI projects are predicted to deliver erroneous outcomes through 2025 due to bias in data, prompts, or interpretation (https://www.gartner.com/en/newsroom).

When you caught yourself falling into that “confirmation loop,” what did you do differently to break out of it?

Joel Salinas:

Thanks for sharing! The moment I knew something had to change was when I brought up a clearly stupid idea and AI told me it was great haha

I started asking AI to find reasons why what I'm doing may not work, or even fail

Bechem Ayuk:

Thank you so much for this Joel. What I find most interesting is how you've identified that the real issue isn't AI bias, but bias amplification. It's like having a mirror that not only reflects back what you want to see, but also makes it look more polished and credible. The "False Expertise Syndrome" you describe is particularly insidious because it feels like rigorous research when you're doing it.

But if we're explicitly programming AI to challenge us, are we actually developing better critical thinking skills, or are we just outsourcing our devil's advocate function, too? Like, does teaching AI to disagree with us make us better thinkers, or does it just make us better at managing AI disagreement?

I'm genuinely curious because I think this might be the next level challenge... ensuring that these AI interactions are building our cognitive muscles rather than just creating more sophisticated echo chambers with built-in contrarian features. What's your take on that?

Stephen:

I ask my wife for a comment

Joel Salinas:

That’s awesome 😂

Daniel Abreu Marques:

Great article! Will try out the prompt immediately

Joel Salinas:

Thanks, Daniel!

Tope Olofin:

Love the play on words!

Jitin Kapila:

Whenever AI plans or tells me something, I ask "What is the Devil's Advocate perspective on this?" and it brings some sense back. I do the same thing in my Kernels as well.

One thing to notice: this problem is more prevalent in ChatGPT than in Claude and Gemini. Maybe it's just me.

Joel Salinas:

Jitin, that’s great! I’ve seen that too, which is why I migrated most of my workflow to Claude

Kurt Schmitt:

I'm a little late to this party, but I liked this concept and do something similar, so thought I'd chime in. I created a custom GPT for ChatGPT that acts as a "critical thinker" that I can feed any document or text to and it will pick it apart, point-by-point. I might try the "skin-in-the-game" idea and see if that gives it more bite. I'm happy to share the prompt I use if anyone is interested.

Joel Salinas:

That's a genius idea, Kurt, thanks for sharing! I may try it

Ash Stuart:

I find a few things useful to mitigate such LLM tendencies:

- Provide the same initial prompt to the same LLM in more than one separate conversation (i.e., with no context overlap)

- Do this with slightly modified initial prompts carrying slightly different behavioral cues (like the devil's advocate you mention, but a spectrum of them)

- Do this with different LLMs / chatbots.

The responses can be revealing.
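
The checklist above can be sketched as a small matrix of independent runs. This is a sketch under stated assumptions: the stance wordings, the placeholder model names, and the `build_runs` helper are all made up for illustration; plug in whatever client and models you actually use.

```python
from itertools import product

# Behavioral cues spanning a spectrum, per the list above (wordings assumed).
STANCES = [
    "Answer neutrally.",
    "Play devil's advocate against my idea.",
    "Steelman my idea, then list its three weakest points.",
]

# Placeholder names standing in for different LLMs / chatbots.
MODELS = ["model-a", "model-b"]

def build_runs(question: str) -> list[dict]:
    """One entry per (model, stance) pair. Each entry is meant to be sent
    as a fresh conversation, so no context leaks between runs."""
    return [
        {"model": model, "prompt": f"{cue}\n\n{question}"}
        for model, cue in product(MODELS, STANCES)
    ]

runs = build_runs("Should I launch this product next month?")
# 2 models x 3 stances = 6 independent conversations whose answers
# you can then compare side by side.
```

Comparing where the six answers agree and where they diverge is what makes the exercise revealing: consistent points are more trustworthy, while points that flip with the stance or model are the ones to double-check.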

Joel Salinas:

That’s a very interesting approach! Thanks for sharing, I’ll try it

Yvette Ward:

This one: "Be the skeptical reader who disagrees with me. What would they say?"

I'd learned a while back to challenge all the hype feedback from AI.

Thank you... I just met you this week on the Unplugged by Yana G.Y. podcast. Thanks for the tips you shared there also.

Joel Salinas:

Very cool! Glad to kind of officially meet you! :) Here to help if you need anything

Yvette Ward:

Yes, you helped that day as well. I asked the question whether you made your artifacts public on Claude. Thanks for sharing your resource (Futureproof Yourself) during the live.

Joel Salinas:

For sure! I have some free public ones on some posts and then my main 6 tools are on my premium hub page for paid members

Chief Absurdist Officer:

This is an important one, and I love the dialogue you've started here. Keep going.

Joel Salinas:

Thanks! Appreciate the encouragement :)

Rita Previdi:

Thank you for the mention, Joel! I recently attended a course on critical thinking with AI and I'm writing an article about it. I'll be mentioning you there as well for sure :)

Colette Molteni:

Joel, this is such a sharp and necessary callout. You’ve reframed AI not as a productivity shortcut, but as a mirror, one that either reflects our blind spots or sharpens our thinking, depending on how we engage. I especially loved the community prompts; they don’t just invite better answers, they encourage deeper leadership. Emotional intelligence doesn’t get sidelined in AI conversations; it becomes the filter through which strategy and self-awareness meet. Teaching AI to challenge us might be the most underutilized skill in this new era, and you’ve just made the case for why it matters so clearly.

Joel Salinas:

Thank you, Colette! So glad it resonated, thank you for sharing

Chintan Zalani:

I have suffered from the negative bias. I made it my devil's advocate, and it leaned very strongly into busting my ass. At some point, it really hurt. When I tried pushing back, it just threw the other side of the coin at me. At that point I realized, damn, I can't rely on this for most of what it says, bluntly haha. Thanks for showing us this other side of AI, Joel.

I love the graphic too :)

Joel Salinas:

This made me laugh out loud haha! Never thought it would go too far the other way and make us feel dumb
