17 Comments
Alex Randall Kittredge

Compatibility-first is such a sharp lens here. The distinction between utility doubt and identity threat perfectly explains why “more training” often just produces performative adoption and pilot theater instead of real behavior change.

Joel Salinas

exactly! Very well said!

Anna | how to boss AI

Thanks for pointing that out, Alex. That's exactly what harms adoption, especially in performance-driven environments that don't provide enough emotional safety.

Fernando Vago Santana

Instead of competing with technology, professionals should use it to become even more efficient.

Joel Salinas

Very well said!

Dallas Payne

A great discussion here, Anna and Joel! Loved trying the tool out too.

Joel Salinas

Thanks, Dallas!!

Mary Morton

Thank you for the article. My thinking is that it's not just about identity threat and an unwillingness to let go. My experience with AI is that there is a real need for human-in-the-loop oversight. Freely ceding leadership agency to AI is a very real risk, and this may be the root of some pushback from a leader's perspective.

Consider it's like replacing a seasoned professional with a recent college grad. You're not going to get the same quality. Is it good for the recent college grad to assist the seasoned professional? Absolutely. But the seasoned professional isn't going to carte-blanche allow the work of that individual to go out of their department without oversight. Same is true for AI employee implementations.

Anna | how to boss AI

Mary, I appreciate your analogy and comment. Literacy without oversight is a liability.

In this framework, we see this as a 'Human-in-the-loop' necessity. A coachable leader doesn't just hand over the keys; they still oversee the AI. It's not about losing agency, it's about evolving from the person doing the task to the person governing the outcome. Truly seasoned professionals are the only ones with the context to know when the AI is hallucinating.

Dennis Berry

Great article from 2 of my favorite peeps.

I can totally see leaders/managers not trusting the new processes and still trying to "micromanage" the AI. I do it. LOL. How do we make that shift to trust?

Anna | how to boss AI

Micromanagement isn't a trust issue, it’s a direction gap. In the solo world, we're the entire infrastructure, so if an agent hasn't been fed the right context, a manual override is just a logical survival tactic to protect your reputation.

We’re all trying to manage tech moving at a speed we can’t even comprehend. Honestly, that 'lack of trust' is a pretty valid response to the pressure of having to know it all.

In an organizational setting, that pressure is spread across teams, and that's where compatibility and coachability matter.

Joel Salinas

Thank you!!! Anna explained it beautifully.

Diamantino Almeida

The distinction between utility doubt and identity threat is exactly right and I'd add one layer underneath it. Even leaders who've resolved the identity question will revert if the authority structure around them hasn't changed. I've watched technically coachable people quietly route AI outputs through manual approval not because they felt threatened, but because the system still rewarded being the person who knows.

The compatibility problem is real. But sometimes it's the org that's incompatible, not the leader, in my opinion.

Anna | how to boss AI

Great addition! This is why the shift from managing people to managing hybrid teams is so radical. It's not just a mindset change for the leader; it requires a structural change in how we define 'value.' If the system rewards the manual route, the hybrid team never actually forms. We're essentially asking leaders to be secure in a system that might still be rewarding insecurity. Thank you for raising this very important point!

Diamantino Almeida

Thank you, Anna. Great article.

Tony Marsico

Anna, this is great! Having lived through the computing and internet revolutions, I think that as we go through this one, it is good to have a strong understanding of how changes are initially adopted or rejected by employees. Your focus on bringing others forward with the technology is huge.

I think it is also key to focus on listening to those who are hesitant to adopt the technology, whatever the reason. Resistance isn't always based on an unfounded fear. Though we may not like or want others to emulate their resistance, we can often learn from what they focus on. Change management includes making the catalyst of change stronger through listening and learning as well.

Again, great stuff! Thanks!

Anna | how to boss AI

Totally. Listening to all sides of the story is part of the diagnostic exercise, and it typically surfaces what's underneath the resistance.

Thank you for your thoughtful comment, Tony!