Many AI discussions stop at AI + Human. That was the dominant narrative in 2025. "AI is an assistant" that complements human capabilities. In practice, this usually means companies buying Copilot seats and giving everyone a short training delivered by a Big4 consultant.
The real shift in 2026 starts with AI × Human. This is not addition, it is multiplication. Roles change. Decision rights move. Human judgment becomes the scarce asset. The system behaves differently, not just faster.
Most AI programs fail because leaders design for AI + Human, while the organization experiences AI × Human effects. That gap creates fear, resistance, and silent disengagement.
Adoption is not a training issue. It is a system design issue.
Yes, exactly, Adam! I am hopeful that moving into 2026 our thinking will evolve to leverage AI as a partner rather than just a side assistant. We have only begun to scratch the surface of what we can do with it.
This year my observation has been an initial euphoria and amazement, particularly with Copilot, since it's baked into the Office suite that many folks use.
Then the temperature drops as it dawns on people that the AI is, first of all, automating problems that probably should have been eliminated rather than made more efficient.
- Hey, we can now produce transcript summaries of meetings so people no longer need to turn up. Well, if no one needs to turn up, why bother holding the meeting at all?
And the temperature drops further because the ecosystem a company has engineered for itself extends beyond the corporate AI it has purchased, meaning people's day jobs involve multiple tools, most of which the corporate AI can't penetrate or draw into its data set.
And so people start to work for the AI rather than the other way around.
Adoption is indeed a system design issue - no one wants to do dull, repetitive, non-value-adding work; that is what the AI should be doing.
People start to work for the AI rather than the other way around: that's so true, and sad.
My thought process aligns here: we should not be implementing AI just for the sake of AI, especially when it means automating problems that should simply be eliminated from the process to begin with. In my AI deployment work, much of my time is spent in Discovery, and many of the use cases do not make the cut.
For many corporate leaders, AI is less a capability choice and more a career accelerator.
Pick a visible problem like customer service tasks, launch a tool, run a pilot, polish the numbers, then scale the story across the organization.
The math is usually bent in four places (a rough numeric sketch follows this list).
1) Efficiency is defined as time saved, not business impact; time is simply converted into money.
2) Everything is pushed into the tool, even tasks that would have been solved more cheaply without AI; that loss stays invisible.
3) Only direct costs are shown; the real organizational effort is hidden in BAU lines.
4) “AI output” is often sustained by quiet human manual work in the background.
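To make the distortion concrete, here is a back-of-the-envelope sketch in Python with entirely made-up numbers (the hours saved, hourly rate, and hidden costs are all hypothetical), showing how the four bends above compound into a headline figure that looks nothing like the realistic value.

```python
# Hypothetical pilot numbers, purely illustrative.
hours_saved_per_week = 120     # what the pilot deck reports
loaded_hourly_rate = 60        # converted straight into "savings"
weeks_per_year = 48

# 1) Headline claim: time saved is converted directly into money
headline_savings = hours_saved_per_week * loaded_hourly_rate * weeks_per_year

# 2) Tasks pushed into the tool that a simpler fix would have eliminated outright
avoidable_hours = 40

# 3) Organizational effort hidden in BAU lines (change management, prompt upkeep, rework)
hidden_annual_cost = 150_000

# 4) Quiet human work sustaining the "AI output" in the background
shadow_hours_per_week = 25

effective_hours = hours_saved_per_week - avoidable_hours - shadow_hours_per_week
realistic_value = effective_hours * loaded_hourly_rate * weeks_per_year - hidden_annual_cost

print(f"Headline annual savings: ${headline_savings:,.0f}")  # ~$345,600
print(f"Realistic annual value:  ${realistic_value:,.0f}")   # ~$8,400
```

Same tool, same pilot; the only difference is whether the effects in 2) through 4) are counted at all.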
That is not transformation. That is AI theater.
In these cases, life does not get better for the organization; it only gets worse in a different way, while the leader's career quietly ignites and the leader often jumps one or two levels up.
Yes, that is indeed the description of AI theater!
Thank you for the insightful analysis. I have been struggling with a pilot project for the last 3 months and you have provided some talking points for me to bring to my colleagues & collaborators. Wow. What a great way to start the new year.
I am so glad! Thanks for reading, Chrissy, and Happy New Year!
"And so people start to work for the AI rather than the other way around." Like that. That’s the moment tools quietly turn into bosses.
AI should compress execution, not redefine purpose or judgment. If people start working for the AI, leadership has already outsourced responsibility.
I appreciate the approach from MIT Sloan, which promotes the "human-centric AI" narrative. They have been publishing a series of articles on it since 2020: https://mitsloan.mit.edu/experts/human-centered-ai-how-can-technology-industry-fight-bias-machines-and-people
Most leaders confuse 'Empathy' with 'Sentimentality.' They are wrong.
In the context of AI implementation, Empathy is an engineering constraint. It is the understanding of your layer-1 infrastructure: The Humans (Wetware).
If you deploy high-velocity silicon (AI) onto low-trust wetware without an interface strategy, the system rejects the transplant. The 'Gap' you describe isn't an emotional failure; it is an architectural failure.
You cannot upgrade the code if you despise the compiler.
What we need to do is fix the interface and then upgrade the code.
Peter, love how you frame empathy as an engineering constraint rather than sentimentality; that translation into architecture and interfaces is exactly what gets many technical leaders to finally pay attention.
The only nuance I’d add from my work with leaders is that what looks like ‘emotional failure’ on the surface is often the first visible signal of that architectural failure. People feel the gap in trust and design long before we name it as a systems issue, which is why treating those emotional responses as diagnostic data, not noise, feels so critical.
A critical follow-up on the systems analogy:
In every major migration, there is always a subset of legacy hardware that simply cannot run the new OS, regardless of how good the interface (empathy) is.
My question is about the threshold: How do you identify the specific mathematical point where 'Coaching' becomes 'Technical Debt'?
At what stage does a leader accept that the friction isn't an 'Empathy' problem, but a 'Compatibility' problem—and that the node itself must be swapped out to preserve the integrity of the system?
Peter, Anna, I love seeing the added insight from two experts like yourselves!! So true, I’ve seen what you both describe. Sentimentality is not empathy, and it needs to be balanced with clarity in policy and communication.
You’re naming the really hard edge of this work. That threshold between ‘coaching’ and ‘compatibility problem’ is exactly where my leader‑level identity work keeps bumping up against your governance lens. Feels like there’s a rich seam here on thresholds and trade‑offs in AI‑era org design.
I’d love to learn more about your leader-level work sometime, Anna!
You are absolutely correct. In my work with high-level governance, we treat 'Sentiment' as 'Signal Intelligence.'
That 'emotional failure' you describe is the earliest warning system of a policy mismatch. It is the workforce signaling that the architecture (the AI) is incompatible with the reality on the ground.
The mistake most leaders make is treating this signal as 'Noise' (drama to be managed) rather than 'Telemetry' (data to be analyzed).
Trust is the bandwidth of any organization. If the bandwidth is choked by anxiety, the system, no matter how advanced, will stall.
Peter and Ana, I’m struck by how you both frame this: empathy as interface design and emotional response as an early-warning signal. What I keep seeing in AI-era orgs is leaders treating emotional data as “irrational,” when in reality it’s telemetry showing where the architecture isn’t aligned with human reality on the ground. When leaders ignore that signal, systems drift, and teams feel it long before the dashboards do.
This article really points out the “empathy gap,” and I completely agree, but we need to be clear about what that means. AI itself can’t feel anything. It doesn’t understand fear, resistance, or hope. It can only follow the data it’s trained on. If it gives biased answers, or if people get frustrated using it, that’s not the machine failing; it’s how we designed it and how we introduced it.
Exactly, that’s not the machine failing. Well said :)
Most AI implementations fail because they’re introduced without a real understanding of how people actually work, feel, or identify with their roles. Through this lens, the bottleneck is not adoption metrics or model capability, but whether human engagement is treated as core infrastructure rather than an afterthought.
100%! You are showing real expertise here
The irony is that leaders have never had as many tools and methods to understand friction points as they have today, and yet, over and over, the trend is 'solution in search of a problem'.
The truth is out there.
True. With so much out there too, it can be overwhelming.
Yes, the empathy gap is the real AI bottleneck, not the tech itself. That MIT stat is telling: 95% of AI pilots flop because the rollout skips the very people meant to use it. Your framework nails it - especially the “human why” piece. It’s not enough to plug in tools; people need to feel why it matters.
What’s been the most surprising emotional reaction you’ve seen from teams during early-stage AI adoption?
Very well said! I would say there’s generally fear of being replaced
Thanks for reading, Melanie. The most surprising reaction I have seen is that those who appear eager to adopt are sometimes not the ones who end up being the early adopters; rather, it is the people quietly observing the roll-out discussion.
Love that step #2, acknowledge, is "acknowledged" as important in the process, lol. Great read. Not super applicable to me at this moment, but I will pass it along to some of our clients in the corporate space.
Love it, Phil! Colette did a great job presenting it. Have a great new year, man!
Thanks for reading, Phil! Yes, I have found the acknowledgment part of a process is often assumed and, as a result, sometimes missed.
that makes total sense.
E.a.s.e 💙👏
Adoption follows understanding.
Wonderful post, Joel, and thanks for helping to platform the work of Colette, who I also admire greatly. So much great writing here, but for me, the main thing that resonates is the idea that we can't expect organisations to effectively implement AI without first talking to their employees about the ways in which it can positively impact and also potentially negatively impact their working behaviours. A must-read for anybody thinking about implementing AI in the workplace.
I have to admit it is a step I didn’t pay enough attention to in the past.
Thank you for reading, Sam, and for the kind words. Yes, to implement AI, you first have to level-set with employees on the impacts, and not just the positives.
Just Wow! What a great approach. The cyclical review of friction points feels like a great way to get teams to see how AI is not a 'set it and forget it' tool. This is not a 'tool' in the usual way we think of that word at all!
Thanks for reading, Adam. All too often I've seen the "set it and forget it" mentality, which leads these tools to degrade in their purpose over time. We have to think beyond the historical "tool" framework and think of AI as a whole infrastructure shift.
Love the EASE framework. I personally think the last step of review is the most important step!
Thanks for reading, Ilia! Yes, review is another step that is often missed in the push to get things launched.