27 Comments
Brice Barrett:

This is a necessary distinction, Joel and Sam. The shift from functional to critical literacy is essentially the reclamation of the Reflection Gap. We have spent the last decade optimizing for speed, but as you’ve pointed out with the Air Canada case, speed without 'Sovereign Interrogation' creates massive systemic liability. Developing a 'BS Detector' isn’t just a leadership skill; it’s a survival protocol for maintaining human agency. Without that critical pause—what I call the Discipline of Slowness—we aren't leading AI; we are just accelerating the error rate of our organizations. Thank you for this framework.

Joel Salinas:

Thank you Brice! "Accelerating the error rate of orgs" is a great line.

Brice Barrett:

Appreciated, Joel. It’s a systemic risk we often overlook. When we treat AI as an autonomous pilot rather than a high-speed tool, we aren't just scaling efficiency; we're scaling the speed at which small misalignments become catastrophic failures. Critical literacy is the only brake system we have left. Glad the line resonated.

Dr Sam Illingworth:

Thank you Brice. I honestly think this level of critical AI literacy should become compulsory training for anyone making decisions around the use of AI. 🙏

Brice Barrett:

This is the 'Mirror Trap' of the machine, Sam. By reflecting us back to ourselves, AI removes the friction required for real insight. We trade depth for a frictionless user experience, and in doing so, we stop seeing our own blind spots. Maintaining cognitive sovereignty requires looking past that hollow reflection.

Wilbert Kramer:

Knowing when not to use AI, or when to use it only with light assistance, is the key. I found this to be true as a marketer working for an AI consultancy firm (drowning in '10x this and that' and AI-first approaches).

My pledge: human first, AI second, human third (to verify the output).

Dr Sam Illingworth:

This is a fantastic process Wilbert. I wish all AI users were so critical in their approach. 🙏

Shamim Rajani:

Good read as always!

I’ve seen teams produce impressive outputs with AI, only for small blind spots to turn into big problems because no one paused to interrogate the results. Leaders who develop this muscle, who slow down, ask uncomfortable questions, and stay accountable, will be the ones who ride the wave of AI rather than being swept away by it.

Joel Salinas:

Such an important point, Shamim!

Recursive Intelligence:

This is exactly what I’m writing about: practical skills and methods I’ve tested for using AI critically. It’s not just about better prompting or context engineering; it’s about the cognitive scaffolding that allows you to guide and constrain LLM outputs in reliable and repeatable ways. LLM outputs shouldn’t replace your own thinking but augment it in scalable ways, so that it feels like you have cognitive superpowers.

Joel Salinas:

Awesome!! I’ll check out your work

Dr Sam Illingworth:

Thank you. Looking forward to reading. 🙏

Hannes Depuydt:

This is great advice, not just for leaders but for anyone working with AI. I'm definitely guilty of trusting AI output a little too much from time to time. I'm slowly trying to integrate AI solutions where I work, and these posts are invaluable. Implementing it slowly and smartly will result in a much greater benefit than doing it quick and dirty. It's an uncomfortable truth because it's not sexy, and it doesn't just apply to AI...

Dr Sam Illingworth:

Very true Hannes. 🙏

Dan Cucolea:

Really good piece that applies to vibe coding as well! You need to know at least some coding basics before you start creating medium-to-complex apps with AI; I'm talking about payment integrations and user management systems. People take AI-written code for granted without understanding a single thing, and then it can all go downhill.

Joel Salinas:

Great point, Dan!

Dr Sam Illingworth:

Thanks so much Dan.

Mark S. Carroll ✅:

This hits for me because it speaks to the real failure mode leaders keep tripping over: we trained people to use AI, not to interrogate it.

The distinction between functional literacy and critical literacy is the missing muscle. Provenance, power dynamics, and fragility form a framework executives can actually remember and apply, especially when the output “sounds right” and everyone wants to move fast. The Air Canada example is a clean reminder that delegation to AI does not equal delegation of responsibility.

What I especially appreciate here is the insistence on friction. Slowing the system just enough to let human judgment re-enter the loop is not anti-AI; it’s pro-governance. Prompting the model to surface its own assumptions and failure modes is a practical move leaders can adopt tomorrow, not a philosophical stance.

Dr Sam Illingworth:

Thanks Mark. You've done a brilliant job of pulling out the key messages I was trying to communicate with this post. I really hope lots of leaders read this and think about how they can improve their critical literacy rather than just their functional literacy. Of course, if they want to reach out to me or Joel for consultancy work, we'd be more than happy 😉

Neural Foundry:

Honestly, this is one of the best framings I've seen on AI adoption. The 'BS Detector' framing especially resonates after watching teams at my org trust outputs way too fast and then scramble when things broke. The 3 checks feel practical enough to actually use in meetings. What I hadn't considered is how developing this literacy might also help non-technical leaders finally feel empowered to push back on AI hype.

Dr Sam Illingworth:

Thank you! And yes absolutely, effective training in critical AI literacy enables leaders to do just that. 🙏

Jessica Drapluk:

Another amazing collaboration!!!

Dr Sam Illingworth:

Thanks Jess! @Joel Salinas is the 🐐 of genuine community building. 10/10 would recommend. 🙏

Jessica Drapluk:

Thanks, Sam! Just subscribed to Joel’s publication! 🙌🏼🤝

Karo (Product with Attitude):

Another great collaboration!

Dr Sam Illingworth:

Thank you Karo! 🙏