How Noise Causes Problems with Our Decision-Making Capability


Organisational Success Podcast

There are many things that interfere with our ability to make good decisions and judgements. In a new book by Daniel Kahneman, Cass Sunstein and Olivier Sibony, noise is identified as having a major negative impact on both our decision-making and judgements. In this fascinating interview, David talks with co-author Professor Olivier Sibony about what noise is and how and why it has such a significant impact. 

Podcast: Noise


Professor Olivier Sibony


From 1991 to 2015, Olivier was a consultant, Partner and Director in the Paris, New York and Brussels offices of McKinsey & Company. Among other roles, he served as global leader of the Corporate Strategy Practice, as a European leader of the Consumer sector, and as a member of the Firm’s global Partner Review Committee. From an industry sector perspective, he has extensive experience in Consumer Packaged Goods, Luxury Goods, Retail, and Private Equity.

Olivier Sibony is now a Professor at HEC in Paris and an associate fellow at Said Business School in Oxford. Olivier is a writer, educator and consultant who specialises in strategy, strategic decision-making and the organisation of decision processes. He is the co-author, with Daniel Kahneman and Cass Sunstein, of the major book Noise: A Flaw in Human Judgement.

Connect with Olivier

The Book: Noise: A Flaw in Human Judgement




Noise with Olivier Sibony

[00:00:26] Today I’d like to welcome Professor Olivier Sibony, who works at HEC in Paris and the Said Business School here in Oxford. Olivier is a writer and educator and a consultant who specializes in strategy, strategic decision-making and the organization of decision processes. And he’s the co-author, with Daniel Kahneman and Cass Sunstein,

[00:00:53] of the book Noise: A Flaw in Human Judgment, published by HarperCollins. Welcome, Olivier. [00:01:01] Thank you, David. It’s lovely to be here.

[00:01:03] It’s really nice to talk to you. I loved the book. Can you just give us a brief overview of your background and what led you into being part of writing this book?

[00:01:15] So I’m not an academic by training. I was actually a management consultant for 25 years. I worked at McKinsey, and about six years ago I left McKinsey and decided to start a second career as a professor and to specialize in this topic of decision-making. And the reason, really, is that having been able to observe decisions from the privileged position of the semi-outsider, semi-insider that you are as a consultant, it gave me a

[00:01:45] lifelong fascination with how decisions are made and how sometimes they are poorly made. So that led me to study cognitive biases, and from biases to start working with some of the people who knew the topic [00:02:00] much better than me, and that led to Noise, together with my co-authors.

[00:02:07] Lovely. So just to start us off, what are you referring to? What is noise? 

[00:02:13] Very simple: noise is unwanted variability of human judgments. At least that’s how we define it. You could define noise in many different ways, and dictionaries have other definitions, and we all know what noise is on the street, but we define it as the unwanted variability of human judgments.

[00:02:32] The key word here is unwanted. And before we get into what noise is and how bad it is, it’s very important to preface this by saying that there are a lot of situations, a lot of situations, where variability is good. Diversity is great. Creativity is great. Dissent is great. Political pluralism is great. In all the situations where it’s okay to disagree, or where it’s actually desirable to disagree,[00:03:00] we certainly should. So let’s put that aside. Disagreements make markets? Yes, they do. Disagreement and creativity make innovations? Absolutely. Diversity in tastes is wonderful? It certainly is. All of this is great. Now, when you go to a doctor and he tells you one thing, and you go to a second doctor and he tells you something completely different, you are not being creative.

[00:03:26] They are not being diverse. They are not being innovative. One of them at least, if not both, is being wrong. And that sort of variability, in judgments where we expect less variability or no variability, is what we call noise. And our central thesis is that there is a lot more of it in human judgments than we think, that it is a lot more damaging in organizational judgments than we assume,

[00:03:57] and that it can and should be tackled much more [00:04:00] actively than it is.

[00:04:02] Yes, I think it’s more prevalent in human judgment, in judgment-making, than most people assume. We’re talking about noise in judgment-making, and in the book you make this distinction between judgment and thinking. Can you just explain that?

[00:04:22] That’s what I was trying to allude to. Judgment is not synonymous with thinking, because a lot of thinking is not what we call judgment, and professional judgment, here. If you’re thinking about whether you like Bob Dylan better than Taylor Swift, that takes some thinking, I’m sure. Not for me, but for some people, probably.

[00:04:45] That’s not a judgment on which we expect agreement. When you are being creative about something, that’s a very important type of thinking, but we don’t expect agreement. So we have an intentionally narrow [00:05:00] definition of judgments here for the purposes of this book, which is judgments where we expect people not to stray too far from the same answer, where we have an expectation of bounded disagreement.

[00:05:15] Of course, if we call something a matter of judgment, that implies that we don’t expect people to be in perfect agreement. Otherwise we wouldn’t say it’s a matter of judgment; we would say it’s a matter of fact or calculation, and there would be no debate. When we say something is a matter of judgment, we expect some disagreement.

[00:05:32] The surprise is not its existence; the surprise is its magnitude, the fact that there is a lot more of it than we think there is. And there’s a wider, I suppose, latitude of judgments that occur within organizations, whereas what we’re talking about here is an accuracy of judgment within, as you say, bounded disagreements, so that it’s tighter and [00:06:00] more accurate, or at least we’d expect it to be tighter.

[00:06:02] Let’s take some examples to bring this to life. Take fingerprint examiners who are looking at fingerprints and trying to decide whether the latent print from the crime scene matches an exemplar print from the suspect. If they disagree, we have a problem. Now, we imagine that maybe they would disagree once in a million; at least we’ve been conditioned to think that this is the sort of accuracy

[00:06:29] that we should expect. It’s a lot more often than that that they disagree, and unfortunately, when they disagree, there is going to be error as a result. Doctors looking at X-rays or mammograms or things like that: there is a lot more disagreement than we assume.

[00:06:51] Judges looking at the same case, the same trial, can have completely [00:07:00] different reads on what punishment the person who’s been found guilty should receive. Those aren’t insignificant situations. Those are situations with very serious consequences, and where the quantity of noise that we find is at a minimum surprising,

[00:07:23] and in some cases downright flabbergasting. Yes, and as you say, in the case of judges, you use this example in the book about the variability of sentences that people with very similar crimes can get, depending on the judge, but also depending on the time of day and a whole series of other factors that can create variability where you wouldn’t expect it.

[00:07:48] So when we say very similar crimes, we’re on dangerous territory here, because yes, practitioners of the judicial system would say no two crimes are identical, no two cases [00:08:00] are the same, and that’s why we need the immense wisdom of the judges to distinguish the special, unique features of each situation.

[00:08:10] And I do not dispute that; we totally agree with it. What we should consider are artificial situations, study situations, in which you give a number of judges the same case. In fact, you give them a simplified case. One fairly large study that we talk about in the book, and it’s also an old study, but I don’t think it would be very different if we did it today, it might be worse, is a study in which about 200

[00:08:40] federal judges in the US looked at the same vignettes. And you find that the mean difference between two judges, or the median difference between two judges more precisely, is roughly half, a little bit more than half, of the mean sentence. If the mean sentence is seven [00:09:00] years, pick two judges at random and you’re going to have one who says five and the other who says nine.

[00:09:05] That means that in half of the cases, the difference is going to be larger than that. Now, of course, that hasn’t got anything to do with the crime, anything to do with the defendant; it has everything to do with the judge. It’s hard to imagine that Congress, or whoever, in whatever country you’re in, but Congress in the US, intended that to be the case, intended the identity of the judge to be such a large determinant of the punishment.

[00:09:39] And then there’s the problem you mentioned, which is that within the same judge, extraneous circumstances, the time of day, or the weather, or whether his favorite football team lost the game yesterday, seem, based on large econometric studies, to have a [00:10:00] discernible, not a large, but a discernible impact on the sentences that are meted out. And that, again, is troubling.

[00:10:07] Now, a lot of people would assume that this is a bias, and I suppose the question is: what’s the difference between noise and bias, and how are they connected?

[00:10:18] So each of these factors that I’ve described, to the extent that you can identify them, can be called a bias.

[00:10:26] You can say the fact that judges are more severe on hot days than on cool days is arguably a bias. The fact that one judge is more severe than another is certainly a bias of that judge. What creates the noise in the system is the variability of those biases from one person to the next, when the system allocates those individuals to cases randomly. From the perspective of the judicial system,

[00:10:56] or from the perspective of one defendant facing the [00:11:00] judicial system, the fact that you’re going to be assigned to judge A or judge B is essentially random. And if your sentence is largely dependent on that, from your perspective that’s an entirely unpredictable outcome; therefore it’s noise. If you were told that the system as a whole is biased against a certain category to which you happen to belong,

[00:11:25] that would be a predictable deviation, an average deviation, from what would be a just sentence, and you would be able to say there is a bias in this system, a bias against people like me. But this isn’t what we’re talking about here. It’s the random effect of a number of different inputs, some of which can rightly be called biases, which creates a completely random and unpredictable outcome.

[00:11:51] That’s why we call it noise.
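The distinction Olivier draws can be made concrete with a few lines of arithmetic. A toy example (hypothetical sentences, not data from the book): a panel of judges that is unbiased on average, yet very noisy.

```python
import statistics

# Hypothetical sentences (in years) that five judges would give to
# the exact same case; assume the "just" sentence is 5 years.
just_sentence = 5.0
sentences = [3.0, 4.5, 5.0, 6.0, 6.5]  # one per judge

# System bias: the average deviation from the just sentence.
# Here the panel is unbiased on average...
system_bias = statistics.mean(sentences) - just_sentence

# ...but system noise (the spread across judges) is large: which
# sentence you get depends on the lottery of judge assignment.
system_noise = statistics.stdev(sentences)

print(f"system bias:  {system_bias:+.2f} years")
print(f"system noise: {system_noise:.2f} years")
```

With these invented numbers the bias is exactly zero, yet the defendant’s outcome still swings by years depending on which judge is drawn, which is exactly the “lottery” Olivier describes.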

[00:11:53] Got you. Okay. And I just want to skip on a little bit, because in chapter 17 you [00:12:00] refer to, or you conclude actually, that noise is a larger component of decision or judgment error than bias. Can you just explain this a little bit?

[00:12:09] We conclude, or I should say we speculate, because

[00:12:13] one of the things that we intended in writing this book is to inspire more people to do research on noise and to get better data on noise. Let’s backtrack a second: why are we writing this book now, and why hasn’t it been written 20 years ago, at the same time as a lot of the work on biases? Because people don’t notice noise as easily as they do bias.

[00:12:35] Noise is harder to wrap your mind around. Bias has a sort of charisma; it’s such a nice culprit. We love to be able to point our finger at bias and to say, ha, that’s the reason we’re making so many mistakes. Noise isn’t that sexy, because noise is a statistical construct. It takes many observations to diagnose it.

[00:12:57] And once you become aware of it, it’s a [00:13:00] very real problem, but there hasn’t been nearly as much research on it. In fact, much of the research in which we found data on noise was trying to find something else. Sometimes it was trying to find bias, sometimes it was trying to measure accuracy, sometimes it was something completely different.

[00:13:19] That’s in part why it took us so long to find that research and to write this book. So we’re hoping that there’s going to be more research that can answer definitively, and certainly in more situations, the question of whether there is more bias or more noise. Having said that, why do we speculate that noise is at least as large a problem, in general, as bias, which is a very broad,

[00:13:46] overgeneralizing conclusion? Two reasons. First, that’s the case in the studies where we’ve been able to compare the two. There aren’t that many, but in all the studies where we’ve been able to look, [00:14:00] noise accounts for more error than bias. And the second reason, which is more principled I guess, is that if bias were as large as noise,

[00:14:14] it would be obvious. We measure noise as a standard deviation, to make a long story short and fairly simple, and certainly understandable to your audience here. If you have a bias of one standard deviation, that’s a pretty large bias. And if you have, on any measure, a bias that equals plus or minus one standard deviation, it’s very unlikely that you are not going to notice it and fix it.

[00:14:45] So it stands to reason that there is more noise than bias, simply because bias expressed in standard deviations, i.e. in units of noise, would be intolerable if it were more than one. [00:15:00] Yes, I agree. And in fact, you’ve just referred to system noise, and you are kind of separating out noise within a judgment and system noise.

[00:15:11] What are you referring to here? What is system noise and why is it an issue for organizations? 

[00:15:17] System noise is what we’re talking about when we say the judicial system is a lottery. From the perspective of one individual, you don’t know, and you can’t know, and there is no predictability of the judgment that you’re going to get.

[00:15:31] So as a system, the judicial system has noise; as a system, the medical system has noise, because different doctors, or the same doctor at different times, are going to give different judgments. When we describe it this way, we highlight the fact that noise is a problem of organizations. It’s a problem of systems.

[00:15:52] People sometimes ask us, but what can I do to become less noisy? You, as a person, can’t do much to be [00:16:00] less noisy. You can try to be more disciplined in your thinking and more rigorous, and so on. But what defines noise, to a large extent, is how your judgment differs from that of the person sitting in the office next door, who is expected to produce

[00:16:14] an identical or similar judgment to yours. And you only control part of that. The other part is the person sitting next door, and all the other people sitting in the other offices on the same floor. So noise is a problem of organizations. That’s why we call it system noise, to highlight that.

[00:16:34] And that helps us to then break it down into its component parts and to be able to tease them apart.
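The “component parts” mentioned here are, in the book, level noise (some judges are simply more severe across the board) and pattern noise (idiosyncratic judge-by-case variation). A rough sketch of that decomposition, using population variances and entirely invented ratings:

```python
# Hypothetical noise audit: rows = judges, columns = cases (years).
# Decomposition: system noise^2 = level noise^2 + pattern noise^2.
ratings = [
    [4.0, 6.0, 8.0],   # judge A
    [5.0, 7.0, 9.0],   # judge B (uniformly more severe: level noise)
    [6.0, 5.0, 10.0],  # judge C (idiosyncratic ranking: pattern noise)
]
n_j, n_c = len(ratings), len(ratings[0])

grand = sum(sum(row) for row in ratings) / (n_j * n_c)
judge_means = [sum(row) / n_c for row in ratings]
case_means = [sum(ratings[j][c] for j in range(n_j)) / n_j for c in range(n_c)]

# System noise^2: variance across judges for each case, averaged over cases.
system_var = sum(
    sum((ratings[j][c] - case_means[c]) ** 2 for j in range(n_j)) / n_j
    for c in range(n_c)
) / n_c

# Level noise^2: variance of the judges' average severity.
level_var = sum((m - grand) ** 2 for m in judge_means) / n_j

# Pattern noise^2: what is left once judge and case effects are removed.
pattern_var = sum(
    (ratings[j][c] - judge_means[j] - case_means[c] + grand) ** 2
    for j in range(n_j) for c in range(n_c)
) / (n_j * n_c)

assert abs(system_var - (level_var + pattern_var)) < 1e-9
print(f"system {system_var:.2f} = level {level_var:.2f} + pattern {pattern_var:.2f}")
```

The numbers are made up, but the identity holds in general: case-to-case differences don’t count as noise (different cases should get different judgments), while everything that varies across judges splits cleanly into a level term and a pattern term.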

[00:16:42] And it’s that disparity in judgment across people that we expect to be experts, and to have much tighter agreement in their judgments. And in the book you refer to the illusion of agreement.

[00:16:57] Can you just explain this? [00:17:00] So we mention the illusion of agreement in a couple of contexts. The main context: if you’re one of those experts, let’s take an example that we have in the book, of experts in an insurance company who are, for instance, underwriters, who set the price of insurance policies, the premium that a customer is being quoted.

[00:17:25] These people are experts. They are actuaries. They’ve been trained for many years in the company. They are following the rules and the procedures of the company. And they fully expect that if one of their colleagues were looking at the same case, she would come very close to their judgment. They are all very surprised when they discover that this is far from being the case.

[00:17:50] The quantity of noise is roughly five times larger than they expect in that organization. And that surprises them, because they had been living [00:18:00] under the illusion of agreement. They were living under the illusion that if someone else, whom they respect and trust, and who has the same training and background, had a chance to see the same evidence that they see, she would of course concur with them, or disagree very mildly, because, hey, we’re not machines.

[00:18:21] We can disagree a little bit. But they have no idea that this illusion of agreement is an illusion. That’s one of the main reasons why we don’t pay attention to noise; we don’t notice it. And why people, when they discover this book, say, wow, it’s useful, and at the same time, I had never thought about it.

[00:18:42] That’s really true.

[00:18:45] Yeah. And just taking this into organizations: organizations fail to, well, pay any attention to the noise that exists within them, particularly in judgment-making and decision-making. Can you give us [00:19:00] some examples of that kind of organizational noise?

[00:19:02] That’s the other sort of deafness to noise, or blindness to noise, whichever neutral term you prefer: it is indeed organizational. So as individuals, we live in the illusion of agreement. Organizations, in fact, perpetuate that illusion and maintain that illusion by making sure, in many subtle ways and sometimes in not-so-subtle ways,

[00:19:29] that noise does not get revealed, that it does not get exposed. In that insurance company, it is a bit embarrassing to realize that there is such a large discrepancy between two experts. So it makes sense, from an organizational standpoint, to make sure that in fact two underwriters do not look at the same case, at least not separately, and that if they do look at the same case, they actually discuss it

[00:19:57] while they’re looking at it and get a chance to [00:20:00] exchange their views and to converge. When we are hiring someone and both of us are interviewing the candidate, the common practice in most companies is not for us to interview the candidate completely separately. Yes, of course we’re going to meet him separately, but if you’ve seen him first,

[00:20:20] you might come to me and say, it would be truly great if you could see this candidate, rather quickly, because... oh, I don’t want to bias you. But you see? Right there, you’ve biased me, compared to if you had come and said: it would be good if you saw this candidate; see him when you have a moment, I’d just like to get your thoughts on him.

[00:20:38] I mean, this is fairly subtle, but sometimes it’s not nearly that subtle. You walk into the meeting to discuss the candidates and the head of the department says, I think we found the perfect woman for the job, what do you guys think? And we’ve all been there, right?[00:21:01] Organizations do this because it saves them the trouble of dealing with noise. And it’s a very widespread thing. We have several stories about it in the book. My favorite one is the university where the admissions department had several people review the files of candidates, and their practice was that the first person would write her notes on the file and then pass it on to someone else.

[00:21:27] So one of the researchers we interviewed told us he recommended to that university that they should not do that; they should make sure that the judgments would be independent, and that the second person should not see what the first person had written when she was making her judgment. And their answer was:

[00:21:44] oh yeah, of course, we know that. In fact, that’s how we used to do it, but we disagreed so much that we decided to switch to the current system. Which really tells you that this organization, like many other organizations, values [00:22:00] consensus and agreement more than accuracy. Yes, and it’s that kind of blindness to that kind of noise that creates those kinds of problems.

[00:22:13] One of the other concepts that you explain in the book is the idea of naive realism and the impact that it has within organizations. Could you just explain that? Naive realism is the root cause of the illusion of agreement that I was talking about earlier. If I’m one of those underwriters in the underwriting department of the insurance company, I truly think that I see the case in front of me the way it is.

[00:22:45] And the reason I see it that way is not because I’m projecting my biases or my views or my experiences, but because that’s the way it is. And that’s why I assume that someone else, who is reasonable and attentive and not ill-intentioned in any way, must see it, [00:23:00] roughly if not exactly, the same way that I see it, because that’s how it is.

[00:23:10] We don’t, as Danny was writing in his previous book, we don’t go through life imagining alternatives to the way we see the world. And that’s a fairly profound psychological observation. It is, and it also impacts how people deal with uncertainty as well, within my area. So we get a similar kind of thing coming up, with people

[00:23:38] thinking about other alternatives to the situation as they’re perceiving it at the time. And the people who are really good with uncertainty tend to go looking for stories and examples that don’t fit their model, because they’re trying to find the reality of the situation, as opposed to making the assumption that they’re seeing reality,[00:24:00] which quite a lot of people do, I think. So, yeah. Now, I got fascinated in the book by noise audits. Could you just explain these, and what the main elements of a noise audit are? The example of the underwriters that I was describing: how do we know that there was so much noise? Because we did a noise audit.

[00:24:20] The example of justice that I was talking about earlier: how do we know that there is noise in the judicial system? In that example with the federal judges, that was a noise audit. So how does a noise audit work? You take a number of judges, in the abstract sense of people who judge; they could be federal judges, but they could also be underwriters in an insurance company, and you put them in different rooms so that they don’t communicate.

[00:24:47] And you give them the same cases, and you ask them to independently give their judgment on the same cases, and you just measure the results. And that’s when you [00:25:00] discover how much noise there is. And the reason this is important is because it could be the case, we haven’t seen it so far, but it could be the case, that you find:

[00:25:10] it’s okay, we have noise; of course we have to have noise, this is a matter of judgment; but we think the amount of noise is tolerable, we can afford it, it’s not the sort of problem for our business or for our organization that we have to deal with. So this gives you a measure of the magnitude of the problem.
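Scoring a noise audit like the one described is mechanically simple. One plausible metric (the book reports a similar statistic for the underwriters) is the median relative difference between two randomly chosen experts’ answers to the same case; the premiums below are invented:

```python
import itertools
import statistics

# Hypothetical premiums (in $) quoted by five underwriters
# for the same insurance case, assessed independently.
quotes = [9_500, 12_000, 16_000, 21_000, 30_000]

def relative_difference(a: float, b: float) -> float:
    """Gap between two quotes, as a fraction of their average."""
    return abs(a - b) / ((a + b) / 2)

# Median relative difference over all pairs of underwriters: the
# expected answer to "by how much would two colleagues differ?"
pairs = itertools.combinations(quotes, 2)
noise_index = statistics.median(relative_difference(a, b) for a, b in pairs)

print(f"median relative difference: {noise_index:.0%}")
```

For these invented quotes the median pairwise gap is over 50% of the premium, the kind of figure that, as Olivier notes, organizations find hard to believe until they run the audit themselves.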

[00:25:29] It will also, and in most cases it does, convince people that something must be done. Without that evidence, people will say: in theory we see the problem, but it can’t be the case for us, because we are different, we’re so good, we’re unique and wonderful, which everybody thinks they are. Yes. So say you’ve

[00:25:58] done a noise audit and [00:26:00] you’ve found that you’ve got this variability of judgment going on within an area of work. What do you do about it? You try to find remedies to noise that help you deal with the problem. We call this decision hygiene. You try to put in place practices that promote decision hygiene, which means that there is going to be less noise in your judgments.

[00:26:25] We use this quaint metaphor of hygiene because noise is not a disease that you cure. A bias is a disease that you cure: you can say, we have this bias, we’re going to push back against it. But noise can go in all directions, so you need a prophylactic approach. You need an approach where you protect your decision from the many influences that could pollute it and create noise in all kinds of directions.

[00:26:56] How does that work? We’ve got a [00:27:00] repertoire of techniques, a toolkit of techniques, that in some situations may work and in others may not. And it takes some judgment to figure out which ones to use, by the way. That’s one of the topics on which we do hope that more research will be done. One example, for instance, that is well known:

[00:27:18] aggregating independent judgments, taking the average of a bunch of judgments. That’s the well-known wisdom-of-crowds logic. We’ve talked about the danger of doing this without independence, and of having social influence in the context of the judgments. But if you can put that under control, aggregating multiple judgments is a pretty good way to reduce the problem of noise.
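The noise-reduction effect of aggregation is easy to demonstrate by simulation: averaging k independent, equally noisy, unbiased judgments shrinks the noise by roughly √k. A sketch with made-up parameters:

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0   # the "correct" judgment (arbitrary units)
NOISE_SD = 20.0      # noise of a single judge
PANEL_SIZE = 6       # e.g. six doctors seeing each patient
TRIALS = 20_000

def judgment() -> float:
    """One unbiased but noisy judgment."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

single = [judgment() for _ in range(TRIALS)]
averaged = [statistics.mean(judgment() for _ in range(PANEL_SIZE))
            for _ in range(TRIALS)]

# Averaging PANEL_SIZE independent judgments shrinks noise by about
# sqrt(PANEL_SIZE): here, 20 / sqrt(6) is roughly 8.2.
print(f"single-judge noise: {statistics.stdev(single):5.1f}")
print(f"panel-of-{PANEL_SIZE} noise:    {statistics.stdev(averaged):5.1f}")
```

The √k shrinkage only holds if the judgments really are independent, which is exactly why the social-influence caveat Olivier raises matters.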

[00:27:43] It is, of course, costly. Have every patient be seen by six doctors and take the average of their judgments? It’s going to be a bit expensive. So it doesn’t apply to every situation, but it’s one approach. Another approach that [00:28:00] works wonders in many situations is to structure judgments, to introduce some sort of guidelines that tell people, when you’re meeting with candidates for a job:

[00:28:12] we don’t just want to know whether you like the candidate, or whether you think the candidate is good or great for the job. Here are the three, or the four, or the five, or the seven things on which we want you to rate the candidate, and we want you to rate them separately on those dimensions. And by the way, on some of those dimensions, get the rating not from an interview but from another data point. It might be a test.

[00:28:41] It might be a job-sample exercise where we ask people to produce some of what they would have to do in the job. It might be the grades they got in school, if that is relevant. It might be some other source of information, separate and not polluted by the judgment you form in an [00:29:00] interview. We can then combine this with aggregation, by having multiple people opine on those different dimensions of the judgment, and combine the structured judgments with multiple independent judgments.

[00:29:14] And if we do that, we’re going to have structured recruiting, which has been proven for many decades to be superior to the traditional unstructured interview. So that’s another example. There are a few more. And you mentioned the importance of keeping a mindset where you go looking for information that could contradict your beliefs,

[00:29:35] and that would unsettle you a little bit. That’s another one. Yeah, there are a few more techniques like this, but the basic idea is: put your decision process under control, introduce some discipline in it, try to be more rigorous in your thinking. And this is an unpopular thing to say, but I’m going to say it anyway:

[00:29:55] don’t view a situation of judgment as the place to [00:30:00] express yourself. Don’t view it as the place to express your individuality and to show your peers how right you are against them. If that’s the way you view it, you are likely to add noise, because if you expect people to agree with you, you should probably start by trying to agree with them.
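The structured-judgment approach to hiring described a moment ago can be sketched in a few lines. The dimensions and scores here are entirely invented, and real structured recruiting would calibrate them carefully; the point is only the mechanics: separate dimensions, independent evidence sources, mechanical combination.

```python
from statistics import mean

# Hypothetical rating dimensions, each fed by its own evidence source.
DIMENSIONS = ("technical_skill", "communication", "work_sample", "domain_knowledge")

def overall_score(ratings: dict[str, list[float]]) -> float:
    """Average each dimension across its raters, then average the dimensions.

    Keeping dimensions separate (and sourced independently) is the
    'structured' part; combining them by formula rather than by one
    holistic impression is what reduces noise.
    """
    dimension_scores = [mean(ratings[d]) for d in DIMENSIONS]
    return mean(dimension_scores)

candidate = {
    "technical_skill": [4, 5],    # coding test, graded twice
    "communication": [3, 4, 4],   # three independent interviewers
    "work_sample": [5],           # job-sample exercise
    "domain_knowledge": [4, 3],   # written test + interview
}
print(f"overall: {overall_score(candidate):.2f}")
```

This also combines the two remedies Olivier mentions: aggregation (multiple raters per dimension) layered on top of structure (separate dimensions, deferred combination).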

[00:30:24] Yes. We see quite a lot of this in research, where there’s quite a lot of work done on rater validity and reliability during tests and things, where we’ve got a lot of observers either watching something or doing interviews, and there’s a lot of effort placed on trying to increase rater validity and reliability.

[00:30:44] Just in that context, and for those of your listeners who are familiar with the terminology: noise is just the lack of reliability. Yes, inter-rater or intra-rater reliability, depending on the setting. [00:31:00] Yes. Now, one of the things that you do talk about in the book is this idea of debiasing, and you see quite a lot of claims from consultants and some educators that they can remove bias through educational and training processes.

[00:31:17] What are your thoughts about that? We are quite prudent about the potential for debiasing. We review some of the research that has tried to debias people. We find that not much of it has produced lasting results, not much of it has produced results that transfer from one

[00:31:39] field to another, and not much of it has produced results on many biases. The problems with debiasing are many. First, for all those reasons, it’s very hard. Second, even if you could debias yourself on one particular bias, it’s actually quite hard to know which bias to combat. [00:32:00] And this is one of the main reasons why we think, for most real organizations, noise is a bigger problem than bias.

[00:32:11] It’s very hard to know in which direction biases are going to pull you. Take a classic and simple example: you make an acquisition. That means you’re going to reallocate some resources from your core business to that acquisition. Or you’re trying to launch a new business. Let’s make this even simpler:

[00:32:30] you try to launch a new business. Are you a victim of status quo bias in deciding not to launch the business? Or are you a victim of overconfidence when you decide to launch the business? Now, after the fact, we can wait and see whether it’s a success. If it fails, we’ll say you were overconfident. And if it succeeds, we’ll say the people who didn’t do it, unlike you, were victims of status quo bias and inertia in their resource allocation, and of sunk costs. [00:33:00] That’s very easy to do after the fact. Before the fact,

[00:33:05] how do you know which bias to combat? We find it very hard to give any easy prescriptive advice on how to fight bias in those kinds of complex decisions. To add some nuance to that: there are, of course, some situations in which the general direction of the bias is well known.

[00:33:29] If you are making a plan, we know the planning fallacy will probably lead you to be overconfident, not underconfident. If you are anchored on something, on some price, we know that’s a pretty strong bias. But again, if I don’t know on what number you’ve been anchored, I can’t predict the direction of your bias.

[00:33:51] So there are a lot of these biases to the extent that they’re unpredictable, that their direction and their magnitude varies from one [00:34:00] person to the next are going to produce results that are essentially unpredictable. And that are not what we do say that you can do about biases is. Watch or try to watch the decision process in real time, have someone who observes the decision process and see if in the right organizational climate and with the right senior sponsoring, that person may be able to alert you to biases in your decision-making process in real time, as it unfolds.

[00:34:39] If we have a hope for debiasing, it is not to say let’s identify the biases that we have and overcome them as individuals, because that doesn’t work. It is not to say let’s identify the biases that we have as an organization and that have led us to make bad decisions, because it’s probably [00:35:00] not easy to identify what those biases are.

[00:35:03] It’s to have someone who is in a position to observe the decision process in real time and to say: aren’t we victims of anchoring here? Aren’t we victims of overconfidence here? And to keep people a little bit more honest in the moment. Yes, I really like that idea of having a kind of bias observer, or a process observer. Some people call him or her a bias buster, which I find quite fun.

[00:35:37] Yeah, a bias observer. Just imagine it on people’s passports: bias buster. That’s brilliant. Okay, conscious of the time, right at the end of the book you ask a really interesting question, and I’d like to turn that question around on to you. The question is: what would the less noisy,

[00:35:58] less noisy [00:36:00] organization be like? So I present that to you. Well, we present this in a sort of aspirational way, in a sort of “I have a dream” way. Not nearly as inspiring as the dream that phrase evokes, of course. We do realize that noise is not going away, and that some of the remedies to noise, which we haven’t talked about, have downsides.

[00:36:26] And you can’t say let’s bring noise down to zero everywhere. That would make for an entirely inhuman world where everything would be automated; it would be hell in many ways. So that should not be the aspiration. The aspiration should not be to live in a noiseless world, but to live in a less noisy world, in which some of the most important decisions are made in a somewhat more disciplined way. And getting there [00:37:00] takes noise audits, as we described. It takes education, in the sense that a lot of people, when they hear about noise, say: but it’s beautiful, it’s human nature, it’s wonderful. And it’s only when they ask whether it’s so wonderful when it’s your doctor who gets it wrong that they realize that this romantic idea, that diversity in human judgment is something great,

[00:37:23] may not always be well advised. So it’s going to take some work to get there. But our hope is that as people become aware of how much noise there is and how detrimental it is, they are going to put in place some of those measures that we’re talking about, prudently and carefully, making sure that they don’t overdo it and do more harm than good.

[00:37:50] And that this is going to result in systems in general being a little bit more reliable, a little bit more predictable, a little bit more [00:38:00] just. And that this will save a lot of money, a lot of misery and some lives, and bring about, hopefully, more accurate judgments between people within a system, and systems making better judgments.

[00:38:16] And as you say, with the crew in the cockpit, you want them making accurate judgments if something goes wrong, and fast. That would be nice; I’d quite like that. Thank you so much, Olivier. How can people contact you? Thank you, David. This was fun, and thanks a lot for the opportunity.

[00:38:32] People can contact me on LinkedIn, that’s the easiest way, or through my email at HEC Paris, which is easy to find: it’s last name at hec dot… We’ll put links to all of that in the show notes. So this is the book, Noise: A Flaw in Human Judgment, published by Harper Collins, and it’s out now.

[00:38:51] It’s a really excellent and practical, evidence-based read, which is why we’re so interested in it, and I fully [00:39:00] recommend it. It’s a very good read and it certainly made me think. Thank you very much, Olivier. Thank you so much.



David Wilkinson

David Wilkinson is the Editor-in-Chief of the Oxford Review. He is also acknowledged to be one of the world's leading experts in dealing with ambiguity and uncertainty and developing emotional resilience. David teaches and conducts research at a number of universities including the University of Oxford, Medical Sciences Division, Cardiff University, Oxford Brookes University School of Business and many more. He has worked with many organisations as a consultant and executive coach, including Schroders, where he coaches and runs their leadership and management programmes, Royal Mail, Aimia, Hyundai, the RAF, the Pentagon, and the governments of the UK, US, Saudi Arabia, Oman and Yemen, for example. In 2010 he developed the world's first and only model and programme for developing emotional resilience across entire populations and organisations, which has since become known as the Fear to Flow model and is the subject of his next book. In 2012 he drove a 1973 VW across six countries in Southern Africa while collecting money for charity and conducting on-the-ground charity work, including developing emotional literacy in children and orphans, among a number of other activities. He is the author of The Ambiguity Advantage: What Great Leaders Are Great At, published by Palgrave Macmillan. See more: About David | David's Wikipedia Page