The Cannibalization of Competence - Part I: Waiting For the AI Meteor to Strike
The Surprising Reason Why Most Companies Are Completely Unprepared For the Age of AI
FOMO.
That’s how I would describe the current age of AI: everybody has Fear Of Missing Out (FOMO) and jumps at any opportunity to use AI.
Even when it makes no sense at all to use AI.
I get the hysteria. I really do. I’m also excited about AI and experiment with it every day.
Every company and every individual wants to make sure they don’t miss the AI wave as it passes by.
However, we must watch out that we don’t develop AI tunnel vision. AI isn’t the only thing that matters in your organization.
AI FOMO is the very reason I believe most organizations will be like dinosaurs waiting for the meteor to strike: completely unprepared for the age of AI.
The fact that they believe AI expertise is THE problem is exactly what will guarantee their demise once the meteor hits.
Yes AI expertise matters, but it’s not nearly enough. It’s not a magic wand that changes everything with a single wave.
Something else is going to become way more important and it has NOTHING to do with AI.
What should we do to prevent being hit by the AI meteor? Instead of asking yourself the question:
“How can our organization better make use of AI?”
You should be asking yourselves the much more interesting and useful question:
“What kind of organizations will benefit the most from AI?”
When you answer this last question, you will understand why for most organizations upskilling their AI expertise won’t make a big difference. It may even make their existing situation worse.
Let’s answer the last question by first exploring the kind of organizations that won’t benefit from AI.
What Kind of Organizations Won’t Benefit From AI?
"I don't trust any of you", the CTO blurted out.
The room fell silent. Everyone’s eyes darted around the room as we looked at each other in disbelief. I was sitting in a meeting room with ten competent Product Managers working at a local department of a multi-billion-dollar organization.
The CTO’s surprising answer was a response to a question I had asked: “Why don’t you let us Product Managers decide together how we do roadmapping?”
Every single person in that room was more competent than the CTO in the realm of roadmapping. But because they didn’t trust us, our expertise didn’t matter.
We can easily visualize the above situation using the Slipstream of Trust model, but let me first briefly explain the model.
The Slipstream Model of Trust in the Age of AI
We can visualize the relationship between Competence and Trust as follows using the Slipstream Model of Trust.
X-axis - Competence: your ability to make the right decisions. Can you make the right decisions?
Y-axis - Trust: your ability to call the shots. Are you allowed to make decisions?
Why Competence and Trust as the two dimensions? Well, the higher your Trust and Competence, the more Agency you can have.
Agency is defined as:
“The capacity to decide and act intentionally to produce desired results.”
The better you’re capable of making the right decisions (Competence) and the more degrees of freedom you have to make decisions (Trust), the more Agency you have.
The more Agency you have, the better you can make decisions and act intentionally to produce superior results.
What’s the purpose of the straight line running right in the middle between Competence and Trust?
It’s pretty simple:
Above the line: you have more Trust than you deserve based on your level of Competence.
Below the line: you experience less Trust than you deserve based on your level of Competence.
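The two cases above can be captured in a tiny sketch (a toy illustration only; the 0–10 scales and treating the middle line as the diagonal where Trust equals Competence are my assumptions, not part of the original model):

```python
# Toy sketch of the Slipstream Model of Trust. Assumes both axes run
# from 0 (low) to 10 (high) and the middle line is the diagonal
# where Trust equals Competence -- illustrative assumptions only.

def slipstream_position(competence: float, trust: float) -> str:
    """Classify a position relative to the line in the middle."""
    if trust > competence:
        return "above the line: more Trust than your Competence warrants"
    if trust < competence:
        return "below the line: Trustcapped, less Trust than your Competence warrants"
    return "on the line: Trust matches Competence"

# The 'I don't trust any of you' scenario: high Competence, low Trust.
print(slipstream_position(competence=8, trust=2))
```

Competent people in a low-trust environment land below the line, which is exactly the Trustcapped situation described next.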
There is a bi-directional relationship between Trust and Competence: Trust enables the growth of Competence and Competence enables the growth of Trust.
The opposite is also true: Distrust cannibalizes Competence and Incompetence creates even more distrust.
Above the line you enjoy Agency and the potential growth of Agency. Below the line you’re actively undermining and Trustcapping Agency.
Trustcapping Agency: The Cannibalization of Competence
When you’re below the line, it’s difficult to grow Competence and Agency, because you’re being Trustcapped.
Trustcapping limits the growth of Competence and may even result in Distrust, which then further works to diminish your level of Competence.
Trustcapping employees is extremely common, especially as organizations grow in size. Have you ever been in a situation where you knew the right thing to do, but you were not allowed to do it? Congratulations, you’ve been Trustcapped.
Here are some extremely common examples of Trustcapping:
A launch date is set without involving any of the teams responsible for delivering on that date.
Silowork. Teams work in silos because they trust their own area of expertise more than the expertise of others. Silos are a highly effective way to Trustcap employees.
Every team must follow the same process and way of working.
The teams are forced to work with Scrum or an ineffective Scaling Framework like SAFe.
Developers are not allowed to talk to customers and/or employees are not allowed to talk to developers.
Managers don’t give the information teams need, because they are scared of losing power and control.
Excessive approvals before doing anything.
Asking a consultant to advise you on how to fix your organization: they tell you everything you already know, but now you listen because you’re paying them a lot of money.
We’ve all been in these kinds of situations where we knew the right thing to do, but we lacked the Agency to make it happen. Even if it was something ridiculously simple, like getting a license for a software product that would unblock a whole team that’s been waiting for months.
Why does it matter to understand if you’re being Trustcapped or not?
In the Age of AI, increasing Agency will become more important than ever. Trustcapping your employees means you won’t be able to reap the benefits of AI because Trustcapping Agency results in the Cannibalization of Competence.
We can explain the Cannibalization of Competence by visualizing the story I told you about the CTO who said they didn’t trust us. We were in an environment of heavy Trustcapping, which resulted in an environment of low Agency, where we were not able to make decisions and take the right course of action.
The end result was that we were burning lots of money with little value to show for it. We can visualize our frustrating “I don’t trust any of you” predicament as follows:
We were experiencing the Drag of Distrust, and our Agency was being Trustcapped: we could make the right decisions, but we were not trusted to make them. When a lack of Trust keeps you from making the right decisions you could be making, you’re actively Cannibalizing your existing level of Competence.
The Cannibalization of Competence happens when you know the right thing to do, but you’re not allowed to do it.
As a result of the low Trust environment, you will struggle and be perceived as less Competent. As your Competence drops, Trust will drop even further, resulting in even more drag and frustration.
The Cannibalization of Competence is far more common than you’d think, especially at larger companies.
The Cannibalization of Competence is extremely visible when you’re a Product Manager. The biggest challenge many Product Managers face is NOT not knowing what to do; it’s convincing others that it’s the thing that should be done, because they lack the Agency to do their job.
How does the Cannibalization of Competence relate to AI?
The Cannibalization of Competence + AI
Now let’s add AI to the mix: Imagine I have a magic wand that I can wave to implement an AI that instantly levels up my Competence to the desired range.
The AI magic wand is purely a thought experiment we can run to assess the impact of instantly increasing Competence. The AI magic wand doesn’t exist for the following reasons:
AI isn’t good enough (yet) to massively increase our level of Competence.
Increasing Competence is something gradual, not something instant.
You might be thinking, if it’s impossible to do this with AI, why does this thought experiment even matter?
Even with an impossibly perfect AI as envisioned in this thought experiment, we can explore the repercussions of being able to instantly increase Competence.
Okay, let’s wave our magic wand and see what happens:
All that’s going to happen is that you’re going to experience EVEN MORE drag and frustration because you’re still being Trustcapped and not trusted to make the right decisions. Adding AI won’t magically work to increase the level of Trust you enjoy.
When you’re experiencing the Cannibalization of Competence you already had more than enough Competence to do a better job than you’re currently doing, so leveling up your Competence with AI is completely irrelevant.
Any competence you might be gaining will immediately be cannibalized by the lack of Trust. When you’re Trustcapping Agency, there is a natural tendency for the further reduction of Agency through the ongoing Cannibalization of Competence.
The Cannibalization of Competence: Organizations With Low Agency Will Benefit the Least From AI
When you’re in an environment of low trust where your employees are being Trustcapped then adding AI isn’t the flex you think it is. In fact, it will only make things worse and frustrate your employees.
If your employees don’t have Agency without AI, then all you’re trying to do is put rocket fuel in a rusty, old lawn mower: it won’t change a damn thing.
Most organizations are trying to put rocket fuel in rusty, old lawn mowers by dousing employees with AI training and AI tools.
If you’re currently Trustcapping your employees and limiting Agency, you don’t need more AI, you need more Trust and higher Agency to prevent the Cannibalization of Competence.
In summary:
If your employees can’t exercise their Agency without AI, they won’t be able to exercise Agency with AI.
If you only focus on leveling up your AI when your employees have limited Agency, then you’re simply waiting for the AI meteor to strike and you won’t survive.
When AI makes your people more Competent, it’s crucial they have sufficient Agency to exercise their newly gained Competence.
The unfortunate truth is that most employees are being Trustcapped and have constricted Agency. Companies with Trustcapping won’t reap the benefits of AI, because AI requires employees to have more Agency not less.
If you’re in a situation of Trustcapping with low Agency and the Cannibalization of Competence is going on, what should you be doing?
Stay tuned for part II that I will publish next week.
We will explore what kind of organizations will benefit the most from AI and what we must do to prevent the Cannibalization of Competence and increase Agency.
Special thanks to Ujjwal Sinha, Eman Mifsud, Jon Odo, Nick Brown, Erik de Bos, Tanner Wortham, William Richards, Koen Muurling, Barbara Hallama, and Bernhard Wenzel for their feedback.