Last updated on December 8, 2025
I used to think resistance to new technology was just stubbornness. I was wrong. In this episode, business titan Steven J. Manning dropped a reality check that completely reframed how I view leadership in the AI age:
“People don’t like change? That’s nonsense. People enjoy the benefits of change. They just don’t like to be changed.”
We love a new car or a promotion (change we choose). We hate a new AI workflow forced upon us (being changed).
I sat down with Steven (a leader responsible for billions in revenue) and Darrell Mann (ex-Rolls-Royce Chief Engineer and innovation expert) to solve this paradox. We discuss why AI creates a “sterility of emotion,” why it fails the “Wisdom Test,” and exactly what you need to do when the rules stop applying.
Get More Classic Interviews Like This, 100% Ad-Free.
This is a classic interview from our public archive. If you loved it, MONDAY INFLUENCER insiders get new, curated, ad-free classics just like this one delivered to their inbox every single week.
When you join, you unlock:
- 2x New, Ad-Free Classics from the 500+ episode archive every week.
- All New Episodes (full, uncut, and ad-free).
- Exclusive Members-Only Workshops and live Q&A sessions.
- A library of downloadable audio files, templates, and cheat sheets.
The Wisdom Gap: When Rules Don’t Apply
We often worry that AI will replace executives. But Darrell Mann provided the ultimate counter-argument by defining Wisdom in a way no machine can match.
He argued that while AI can mimic emotional intelligence well enough to optimize a stable system (Level 3 leadership), it fails at Levels 4 and 5, where a leader has to guide people through real change. Why?
“The wise person is the person that knows what to do when the rules don’t apply anymore.”
AI runs on rules and data. When the context shifts—when the rules break down—AI hallucinates. That is where you, the human leader, become essential.
The “New Car” Paradox of Change
If you are struggling to get your team to adopt AI, stop “managing change” and start restoring agency.
Steven J. Manning pointed out that resistance usually stems from a loss of control. As Darrell added, we all have three core drivers: Autonomy, Belonging, and Competence. When you force AI on a team, you threaten their Competence (they don’t know how to use it yet) and their Autonomy (they didn’t choose it).
The solution? Stop outsourcing your thinking. Use AI to gather data, but keep the decision-making—and the empathy—strictly human.
Key Insights & Timestamps
- (00:00) – The “Being Changed” Myth: Steven J. Manning explains why we love new cars but hate new workflows.
- (01:30) – The 5 Levels of Leadership: Why AI can only mimic emotional intelligence up to Level 3.
- (03:45) – AI as a Therapist: The dangerous trend of employees using ChatGPT for emotional support.
- (07:08) – Defining Wisdom: Darrell Mann’s definition of wisdom as knowing what to do when rules break.
- (10:30) – The “Like” vs. “Want” Trap: A billion-dollar lesson on asking the right questions to get the truth, not just what you want to hear.
- (13:20) – System Failure: Why the boss doesn’t decide—the system decides (POSIWID).
Frequently Asked Questions (FAQ)
Can AI replace executive decision-making?
AI can provide knowledge, but it cannot provide wisdom. As discussed in the episode, wisdom is required when “rules don’t apply anymore,” a capability AI currently lacks.
Why do employees resist AI adoption?
It is rarely about the technology. Resistance comes from the feeling of “being changed” by someone else. Leaders must restore a sense of agency to their teams to succeed.
What is the “POSIWID” concept mentioned?
It stands for “The Purpose Of The System Is What It Does.” It means that if you are getting bad results, it is not usually a bad decision—it is a bad system design.
Full Episode Transcript
Nat Schooler: People don’t like change, that’s nonsense.
Steven J. Manning: People enjoy the benefits of change. They just don’t like to be changed.
Darrell Mann: The wise person is the person that knows what to do when the rules don’t apply anymore.
Nat Schooler: How does emotional intelligence change when executives rely on artificial intelligence for decisions? Because we’ve got a big issue there really, haven’t we?
Darrell Mann: I think we have. It’s a big question to start with, isn’t it? It’s… My immediate instinct is to give you the consultant’s answer, which is: “It depends.” Which is how consultants make all their money, isn’t it? The skill is knowing what the dependencies are.
And I think it’s only recently, so you’re right, we’ve been spending a lot of time thinking about this whole question of the human in the innovation story—never mind the overall leadership story. And it’s only been in the last few months, I would say, that things have started to crystallize. And I think it’s because of AI that it’s been necessary to crystallize things now.
So I think where we’re at… and in fact, this month’s e-zine that we’ve published has got a big article in it on emotional intelligence. And it really is saying, maybe we’ve got this whole emotional intelligence thing completely wrong from the ground upwards. From the way that we measure emotional intelligence to the way that we think about why it’s important and what impact it’s going to have on the business.
I’ve still not quite found the language to say it all simply, but the upshot is that what we found is there are five distinctly different levels of job to be done in life. And so there’s the basic stuff. So when I say basic stuff, I mean the fact that AI can’t simulate an awful lot of what emotional intelligence is from a basic level. Humans learn emotional intelligence skills literally from infancy. The baby’s got to try and attract the attention of parents, and crying does it, smiling does it. And we very quickly learn to gauge what’s happening on the faces.
So there’s an awful lot of that stuff which is kind of the basic stuff. Then at the other end of the spectrum is the: “I need the emotional intelligence to lead others through change.” And I think that is the most difficult job that any leader can have to go through. I think the step before that is: I need to lead people through a business which is essentially stable. And it’s dynamic, but we need to re-optimize things—but the keyword there is optimization. So we’re just changing the dials, if you like.
And as soon as you start thinking about this scale of different requirements that we’ve got from an emotional intelligence perspective, you realize that… how many leaders have got the capability, the necessary capability, to deal with each of those different levels of job?
And certainly when it comes to the right-hand side of the spectrum—so the Level 4, the Level 5. Let’s just start with the most difficult one. The leader that’s got to lead through change. I say that is the most difficult job we can ask a leader to do. So I’m talking here about step change. It’s not about changing the dials in the business; it’s about saying, “These dials are wrong. We need completely different dials.” We need to be measuring different things, doing different things. It’s the classic Hero’s Journey type of leadership endeavor.
At that point, if we’re doing that job, then I think it’s fair to say that we certainly don’t know how to measure whether leaders have got that level of emotional intelligence. So there’s enormous problem number one.
Number two, of course, leaders have been successfully leading through those kind of changes through history. And so a big part of our research over the years has been saying, okay, well, these people that have succeeded versus the 98% of people who did not succeed—what are the differences between them? And so a big part of the “One Percenters” book is highlighting what are these traits that you can see in the leaders that can lead through significant change.
And of course, the question’s got AI in the title. That’s one of those step changes that’s coming along for almost every business these days. It’s going to change things.
So I think the summary message is that we don’t know how to measure EQ. So we don’t know actually where we are. Number two, we know that we need the very highest level emotional intelligence to go through changes like bringing in AI to an organization. And with that combination of things, then the basic conclusion is going to be 98% of things are going to fail. But if we learn from the 2%, I think we’ve got some of the indicators of, you know, what is it about those 2% that is going to be teachable, learnable for other leaders such that we can increasingly all find ourselves in the 2% of successful things.
So one of those things clearly is… we built a model called Neptune, an acronym. The first E in Neptune stands for Empath. So being able to empathize with customers, being able to empathize with people in the team, other stakeholders, shareholders, etc. So these human skills are an essential part of the challenge.
And I think AI can help with some of that up to a certain point. So if you can picture that five-level scale, the AIs that we use at the moment don’t understand human emotion. They can mimic it to some extent, up to Level 3. So what that means in practice is that I reckon by now we’ve got millions of people who, when they’re looking for a therapist or some advice about their life, they’re not going to go to an actual therapist. They’re going to go to ChatGPT these days.
So GPT—and other AIs exist, of course—but they go to GPT, and GPT becomes their therapist. And they’re doing that A) because it’s cheaper, it’s more convenient, but actually B) it’s very likely more helpful than their therapist was going to be.
So up to a certain level, then the AIs can mimic that stuff. Now I say up to Level 3 it can do that. So Level 4, Level 5, you’re still in trouble when it comes to leading through change and leading the optimization evolution of a business.
So AI can help up to a certain point. But I think it’s fair to say that leaders are kind of in the unknown right now. I think that’s the upshot of it all. We’ve got this Neptune scale which says, okay, these are the seven things that you need to be doing. Empath is one of those things.
But if you had to pick out one thing from that… what does empathizing with other people mean? I would say that when we look at the characteristics of the “One Percenter,” what you find is they’ve largely been able to put ego on one side. So becoming the ego-less leader.
If you’re trying to make big changes inside an organization, like bringing AI into the organization, things are going to go wrong. And as soon as you’ve got a leader who’s trying to defend why things went wrong, then that’s not going to end well for anybody. When things go wrong in Level 5, what it means is that you’re learning stuff. And it becomes then about having the humility to recognize that, okay, we tried an experiment, it didn’t work out, but what we learned from it allows us to design a better experiment this time around.
So I think that humility of “we don’t get it right all the time and that’s okay” becomes absolutely central to the question you’ve asked me, which is how does EQ change in the AI world that’s on its way to us?
Nat Schooler: Wow. I mean I’d sort of looked at this slightly differently, but you’ve sort of dug really deep there into that. And it is… it is so new that we don’t know, do we? At the end of the day. It’s one of those things.
So on to my next question, and then we’ll see what Steve thinks about that. So what leadership skills separate executives who master AI from those who get replaced? Because we’ve got a big issue there really, haven’t we?
Steven J. Manning: You know, it’s a masterclass from Darrell. I just thought a couple of things… not parenthetic, but a couple of things.
One, you talk about change. I happen to think one of the great oxymorons out there is: “Change is a constant.” That’s oxymoronic at its core. Maybe true, but only coincidentally so.
And the thing that I focus on a lot is that people really… you know, people don’t like change? That’s nonsense. People enjoy the benefits of change. They just don’t like to be changed. And that creates the leadership elegance, if you will, of how to create new leaders, how to make people follow that leader.
By the way, empathy is a big deal in that formula. So again, people like the benefits of change, but they don’t want to be changed. So how do you make them? It’s the whole notion of buying in to that leader. They can’t do that.
And I ask about… you talk about the fundamental question which you answered so elegantly [Darrell]. I think about it a little more pedestrian, if you will. How does emotional intelligence change with AI for decision makers?
Now I caution and worry about—and you’ll forgive this, this is pedestrian—I think about creating sterility of emotions and creating forward-looking stuff. In other words, what exactly do those folks… what are they? Who are they? What do they think? What do they really want? I can’t force emotion down anyone’s throat as a leader. But I can lead people to a better solution if you will. And then if I can do that with empathy, I think I have a winning formula.
And something that you said, Darrell, is so critical and so few people manifest: the leader, the strong leader, the successful leader who has it in him to say, “Oh, I blew this one.” And you know, we don’t really get everything right all the time, so I want more. I want to know more. Help me out here, please. I love the leader who looks at a team and says, “Help me out with this.” Okay? I’ll make a decision, but not in a vacuum.
And one final thought, Nathaniel, and forgive the interruption, is I giggle thinking about GPT being my therapist. I am fascinated by… I’ll tell you why. For a whole lot of reasons. But this morning, one focus. I have spent, oh maybe three or four hours messing around with Elon Musk… oh, let me rephrase that… Grok.
And because Darrell put this bug in my head a couple of months ago about AI hallucinating. So I said, can I compel, can I do something in a research sort of thing where I can get AI to hallucinate? The short answer is yes.
Now the interesting thing about what you said was: GPT as your therapist. So I have hallucinations. I see things that are not there, I hear people that aren’t alive, I’m hallucinating. So I’m going to go to ChatGPT, which will now partake in my hallucinations with its hallucinations. I think that’s hallucinating at a turbo boost! That was just my thought.
Nat Schooler: Well that’s a good point. I actually… so from my point of view, that question opens up the whole conversation that Steve and I have all the time, which is outsourcing your thinking, right?
And I personally believe, from where I’m sitting, that the moment leaders outsource their thinking is when emotional intelligence is going to suffer, because of what you’ve just said about Level 3 on your five-level scale, right?
But if they use their brains and actually ask the language models—because we’re just talking about language models here really—I think that if they ask the right questions, they have the right prompts, and they get the right information back to support their decision making… I personally don’t think emotional intelligence should change at all. What I do think is that the accuracy of decision making should, and I’m saying should, be far greater, in my personal opinion.
Darrell Mann: I mean, I certainly agree with you on the leadership skill that’s now become imperative, and that’s being able to ask the right questions. Whether that’s asking from the rest of the C-suite leadership team or of GPT or whatever AI it is.
And I think being able to ask the right questions, being able to choose the solutions—so in other words, the human, the leader is still accountable for making a decision. The AI is just there to provide the knowledge.
But I do think it does make EQ change, because what does asking the right question mean? So as soon as we put ourselves in that change scenario—so I agree with Steve, people love change but they hate being changed. So one of the core human drivers is we all want to feel like we’re in control of our destiny. We all want agency. We all want to belong to something. We all want to feel competent.
So as soon as you’re doing “change anything,” then that C—the competence part—is going to get worse. Okay? Because as soon as you start trying to do something you’ve never done before, guess what? You’re going to fail. You’re going to get things wrong. And people don’t like that.
So I think the leader’s skill in that sense becomes: okay, well I’m asking the team to change now. I know that means that there’s going to be a whole bunch of feeling now about incompetence because I’ve asked them to go somewhere where they don’t know where it is yet. They’ve never done it before. And so there’s an incompetence challenge ahead of me. And it’s even worse, it’s going to be an unspoken one because nobody’s going to come back to you and say, “I don’t know how to do that.” Because people want to be helpful.
But I know that as a leader, that incompetence barrier is there. And the way it’s going to manifest itself is in all sorts of plausible deniability and reasons why this thing is not going to work. Rather than actually let’s go off and try doing it.
So I think part of the questioning of the AI is recognizing those human fundamentals. My team that’s working for me on this change project, they want to feel autonomous, they want to feel like they belong, and they want to feel competent.
It means that the questions I’m going to ask need to be, I think, different from the ones that I would normally ask. Because what leaders have been trained and taught to be asking about, particularly when I’m working in the US, is: “It’s not personal, it’s business.” We hear it all the time in movies. And suddenly what AI does is flip that story around 180 degrees. It’s completely… it’s all personal now.
And part of that “it’s all personal” is knowing what to ask the AI in terms of…
Nat Schooler: Or… but I’m going to stop you there a second. Or what you need it to ask you. Because if you turn around to the AI and you say, look, I want to create a document, as an example, right? I want to create an InfoSecurity document. What questions do you need to ask me in order to create that document? As one example, right?
And so for me, I just think people are just using it completely the wrong way, right? And that’s sort of part of the question that you’re answering that I haven’t really asked yet, but you’re sort of talking around it already.
So what leadership skills separate executives who master AI from those who get replaced? And I mean this is… we talk about this all the time, all three of us do.
Darrell Mann: So my thoughts on that… and I may well be biased here because I’ve just finished off a book on solving ethical contradictions. So the final chapter of the book is about wisdom. Now I’m not sure I’ve got any qualification at all to talk about wisdom, but that’s the final chapter of the book and that’s where we are.
So the reason I wanted to put that chapter in is because for a large part of my working life, the way I’ve defined wisdom is as “knowledge times context.” Okay? So it’s being able to contextualize knowledge that makes me into a wise leader. But it’s not been a really useful definition.
I wish I could claim that I invented it, but I found it a couple of months ago and thought, “Oh my goodness, yeah, there’s the definition of wisdom that I would like in any leader that I’m working with.” And the definition was: The wise person is the person that knows what to do when the rules don’t apply anymore.
Nat Schooler: I like that.
Darrell Mann: And I think that takes you very much into that change space. Because you encounter those situations as a leader where I think you see contradictions all over the place. And the ethical contradictions book says, look, sooner or later, all those contradictions you’re experiencing—Right versus Right situations… I want to give the team autonomy, but at the same time I need to control what’s happening. So how can I resolve that Autonomy versus Control situation? I want people to break the rules, but dammit, I don’t want people to break the rules! What am I supposed to do in that situation?
First of all, those are really good questions to be asking the AI. But the answer to your question I think is: what are the skills that separate the ones who are going to thrive with AI and those ones that are going to get replaced?
The ones that are going to survive are the ones that know how to work in that situation where… it’s the wisdom thing. The rules don’t apply anymore. What am I going to do now?
Nat Schooler: Right. What do you think on that then Steve? This is something we talk about as well a lot of the time. It’s the wisdom on the firing line, right? And being able to adapt to that.
Steven J. Manning: Look, ultimately there’s no substitute for wisdom on the firing line. And that is something that… there are leaders and there are leaders. And I relate to leaders who act. Who cause. Who drive the game. Who create the action, if you will.
Of course, creating action… if you thoughtlessly create action, you might as well jump out of an airplane without a parachute. That qualifies what a leader should and should not do. That’s a real fundamental of how leadership has changed, certainly in the last few years.
The bottom line, and I’ll go back, is the really effective leader today can lead in many different ways. We talked about leading by example, by checkbook, by dictum, by force of nature, by charisma, laissez-faire, all of that. The bottom line is a successful leader today has to be a really accomplished manager as well. You need to know how the darn thing works. Because just because you dream it, it’s not going to happen. You know, you might as well go in the backyard, close your eyes real tight, and wish to be taller. Effort alone to mold yourself… not going to happen.
So that’s a lot to be said though for understanding different parts of the business that you’re building. Like you have to understand it. Otherwise how are you going to explain to people what to do?
I want to go back to something that you’re talking about again. To be brief. Just a fundamental… a fundamental thing here. AI. Our interaction with AI. Which we do when we get in the car. Which we do when we buy a new refrigerator. Which we do when we pick up a mobile phone. Whether we recognize it or not, that’s the environment. That’s the milieu that we work in.
The well-informed person who is so focused can game AI. You talk about asking the right question. I am really big, as you know, on asking those questions. I think you win the war when you ask the proper question. Then you have to pursue it. You have to fight the war.
But I think that it’s important to recognize that you can game AI. You can do it deliberately or inadvertently.
An interesting lesson… and this ties right into your observations on wisdom. A lesson I learned many, many years ago. I was very young, working for a boss who was brilliant, but man, did he define flawed. But he was brilliant.
And there was… there was the time when he… I sat again for an eight hour session on a discussion with my boss. Because he was far more adept at Inquisition than the Spanish Conquistadors were. Simple question: “Do you think we should do that or that?” Well, you know, two days later and Sunday night at 10:00 and he’s still asking the same question.
I learned a great deal in the process by the way, and I believe I lost most of my hair in the same process. But I remember one day he said to me, “I’m not getting the answer I like.”
Really? “I like to be taller, richer, thinner, and I like every woman on earth to swoon over me.” That’s not going to happen. That’s the “like” part.
And then he said, “Okay, let me rephrase that. I’m not getting the answer I want.”
Okay. Now we have a critical failure. And I remember that day very clearly. I was very young. I think the company might have been doing half a billion at the time. I said, “It’s your business. If you want to blow it up, you go right ahead. I just work here.”
So the point is that, well, hell, you can game management. You can game leadership. And this was a leader whose innate intellect ultimately overcame all of that, when one day he said, “Everything I’ve done for the last two years did not work out. You give it a shot. We’ll sink or swim with you.”
Right. I think I had a minor cerebral hemorrhage because I had no clue. But, you know, you say yes or you say no. Difference being you say yes you get fired a month from now, you say no you get fired now. The rest is history.
But I am focused on wisdom. And you need to have the wisdom as a leader to… well, first you need to have the intellect, the background, whatever. Oh, and the resources! Most of the time human. To interpret AI and everything else. To ask those fundamental questions. Then you have to have the wisdom to listen to all of that input. And then you go right back into the analytics of it and figure out what is really relevant and what is not. Not what you like or what you want. What is relevant and what is not.
That’s where the ball game begins and ultimately ends. Just a comment on… you don’t need clarification Darrell, everything you said stands on its own exactly where it needs to be. I’m just contemplating my ability to game AI. And then if I’m a charismatic tough leader and I speak louder and tougher and I sign the paychecks, I can get the organization to follow. And when they walk out of the conference room they all say, “Guy’s out of his mind.” Never going to work.
Nat Schooler: Yeah. So I think if I remember rightly, at the end of our last conversation we were talking about one leadership skill. And I think my answer was systems. So a lot of those scenarios you’ve just described Steve… boss says “this is what I want.” I think one of the veils we all kind of wear as leaders or the led is that the boss decides. So in other words, the boss decides an objective and everybody’s going to work together and we’re going to achieve that objective.
That’s just not the way the world works. The boss does not decide. The system decides. And if you’ve got a boss who doesn’t understand systems, then unfortunately there’s going to be a lot of frustrated people, including the boss, who is finding himself or herself banging their fists against the desk because, dammit, what I asked for isn’t what’s happened. But it’s not anybody’s fault. It’s a lack of understanding of the system.
The Stafford Beer acronym was POSIWID: The Purpose Of The System Is What It Does. Yeah. And so if the system’s giving you a bad result, it’s the system that’s doing that. It’s not anybody’s decision. It’s just all the things that are interacting with each other that are conspiring to give you this negative thing.
So I think back to the gaming the AI… I think you can certainly do that. But I think a lot of people inadvertently game the AI because they don’t understand systems and therefore they don’t know what questions to ask the system.
Steven J. Manning: Even more fundamental than that. It seems to me like all the… I mean we’ve played around with Perplexity, Claude, GPT etc. My preferred one these days seems to be GPT because it seems to understand me better than the other ones. But the AIs have got an algorithm which basically means that they are people pleasers.
Unlock a Curated, Ad-Free Archive
You’ve just read the insights from this classic interview. Get a 2x new, curated, ad-free classic from our 500+ episode archive delivered to you every week (plus all new episodes uncut) by joining the MONDAY INFLUENCER community today.
Join MONDAY INFLUENCER Now
