David speaks with David Robson, an award-winning science writer specializing in the extremes of the human brain, body, and behavior. He has previously worked as a features editor at New Scientist and as a senior journalist at the BBC. He is the author of The Intelligence Trap, which has been translated into fifteen languages, The Expectation Effect, and The Laws of Connection.

They talked about:

🔍 How outdated science can mislead us

🔬 The replicability crisis in behavioural science

📡 How scientific information gets distorted

🧠 The problem with IQ

🌱 The importance of intellectual humility

💡 Motivated reasoning and the intelligence trap

🎙 Listen in your favourite podcast player


🎧 Listen on Spotify:

📹 Watch on YouTube:

👤 Connect with David:

Twitter: @d_a_robson | https://twitter.com/d_a_robson

LinkedIn: https://www.linkedin.com/in/david-robson

Instagram: https://www.instagram.com/davidarobson/

Website: https://davidrobson.me/

Books:

The Intelligence Trap | https://amzn.to/4aWELqZ

The Expectation Effect | https://amzn.to/4bkisve

📄 Show notes:

[00:00] Intro

[02:21] David's career progression from math to social science

[04:42] Outdated knowledge and its impact

[05:40] Evolution of psychological understanding

[07:05] The challenge of replicability in research

[07:52] The complexity of human psychology

[09:30] How good science writing works

[12:50] The essence of transparency and integrity in science journalism

[15:59] What is intelligence, and how has IQ been measured?

[18:03] The importance of the general intelligence factor

[19:34] The limitations of general intelligence

[20:51] The advantage of combining high IQ with intellectual humility

[22:52] The misconceptions of success

[23:54] A higher IQ can reveal the extent of one's knowledge gaps

[25:11] Intelligence can cause overconfidence and closed-mindedness

[26:52] Earned Dogmatism vs Intellectual Humility

[28:31] Intelligence doesn't necessarily transfer across domains

[30:04] The phenomenon of Nobel Prize Disease

[32:00] Smart people making dumb mistakes

[33:25] The impact of earned dogmatism on critical thinking

[34:56] What is motivated reasoning?

🗣 Mentioned in the show:

University of Cambridge | https://www.cam.ac.uk/

University of Oxford | https://www.ox.ac.uk/

Dan Ariely | https://danariely.com/all-about-dan/

Thinking, Fast and Slow | https://amzn.to/44q2FZD

Lizard Brain | https://www.psychologytoday.com/gb/blog/where-addiction-meets-your-brain/201404/your-lizard-brain

Dunning–Kruger Effect | https://en.wikipedia.org/wiki/Dunning–Kruger_effect

The Language Instinct | https://amzn.to/4dktX7S

Daniel Kahneman | https://en.wikipedia.org/wiki/Daniel_Kahneman

Amos Tversky | https://en.wikipedia.org/wiki/Amos_Tversky

Psychological Priming | https://en.wikipedia.org/wiki/Priming_(psychology)

Elon Musk | https://en.wikipedia.org/wiki/Elon_Musk

Bill Gates | https://en.wikipedia.org/wiki/Bill_Gates

Steve Jobs | https://en.wikipedia.org/wiki/Steve_Jobs

Philip Tetlock | https://en.wikipedia.org/wiki/Philip_E._Tetlock

Nobel Prize Disease | https://en.wikipedia.org/wiki/Nobel_disease

Kary Mullis | https://en.wikipedia.org/wiki/Kary_Mullis

Polymerase Chain Reaction | https://en.wikipedia.org/wiki/Polymerase_chain_reaction

Arthur Conan Doyle | https://en.wikipedia.org/wiki/Arthur_Conan_Doyle

Sherlock Holmes Books | https://amzn.to/4bjJFyo

Harry Houdini | https://en.wikipedia.org/wiki/Harry_Houdini


๐Ÿ‘‡๐Ÿพ
Full episode transcript below

๐Ÿ‘จ๐Ÿพโ€๐Ÿ’ป About David Elikwu:

David Elikwu FRSA is a serial entrepreneur, strategist, and writer. David is the founder of The Knowledge, a platform helping people think deeper and work smarter.

๐Ÿฃ Twitter: @Delikwu / @itstheknowledge

๐ŸŒ Website: https://www.davidelikwu.com

๐Ÿ“ฝ๏ธ Youtube: https://www.youtube.com/davidelikwu

๐Ÿ“ธ Instagram: https://www.instagram.com/delikwu/

๐Ÿ•บ TikTok: https://www.tiktok.com/@delikwu

๐ŸŽ™๏ธ Podcast: http://plnk.to/theknowledge

๐Ÿ“– Free Book: https://pro.theknowledge.io/frames

My Online Course

๐Ÿ–ฅ๏ธ Decision Hacker: http://www.decisionhacker.io/

Decision Hacker will help you hack your default patterns and become an intentional architect of your life. You'll learn everything you need to transform your decisions, your habits, and your outcomes.

The Knowledge

📩 Newsletter: https://theknowledge.io

The Knowledge is a weekly newsletter for people who want to get more out of life. It's full of insights from psychology, philosophy, productivity, and business, all designed to make you more productive, creative, and decisive.

My Favorite Tools

๐ŸŽž๏ธ Descript: https://bit.ly/descript-de

๐Ÿ“จ Convertkit: https://bit.ly/convertkit-de

๐Ÿ”ฐ NordVPN: https://bit.ly/nordvpn-de

๐Ÿ’น Nutmeg: http://bit.ly/nutmegde

๐ŸŽง Audible: https://bit.ly/audiblede

๐Ÿ“œFull transcript:

[00:00:00]

David Robson: So that's the other thing that we don't really acknowledge: if you're super smart but you're misdirecting your intelligence, it can lead you down some very dark paths. So, you know, conspiracy theories, sharing misinformation, climate denialism, all of these things can occur in people who, say, have high intelligence. But they aren't very good at appraising the evidence to hand, so they don't have those critical thinking skills.

This week I'm sharing part of my conversation with David Robson, who is an award winning science writer specializing in the extremes of the human brain, body, and behavior.

Now, David has previously worked as a features editor at New Scientist and as a senior journalist at the BBC. He's the author of three awesome books, one of which is coming out soon. He wrote The Intelligence Trap, The Expectation Effect, and The Laws of Connection.

Now, in this part, you're going to hear David and I talking about [00:01:00] how outdated science can mislead us.

We talk about the replicability crisis in behavioral science and beyond. We talk about how scientific information can get distorted and the importance of good science journalism.

And we talk about the problem with IQ, how it's measured, and how intelligence can often fail us or even be a trap. Then we talk about the importance of intellectual humility and something David calls motivated reasoning.

Now, this was a really awesome episode, so don't forget to look out for the other parts, which will come out later.

You can get the full show notes, the transcript, and read my newsletter at theknowledge.io and you can find David online on Twitter @d_a_robson.

Now, I wanted to say a very special thank you to everyone that has left a review so far for this podcast, from any episode. Please do feel free to leave a review wherever you listen to podcasts. It helps us so, so much.

But seriously, [00:02:00] leave a review. Let me know what your favorite episode was, something that you've learned. It helps us a ton and it means that I can keep looking for ways to improve the podcast and find even better guests.

So if you love this episode, please think of one friend that might also enjoy it and share it with them.

David's career progression from math to social science

David Elikwu: The first thing I wanted to ask you actually is about your career so far, because, funnily enough, I think that there would be some people that might say you're kind of regressing in terms of your commitment to hard science, considering that you studied maths at Cambridge.

David Robson: Yeah.

David Elikwu: You've kind of gone from, you know, doing maths at Cambridge to maybe doing things that are more related to economics and behavioural science.

And I know that very often people talk about, oh, you know, pure maths or pure physics or something like that. And then you go down to economics and then down to social sciences.

David Robson: Yeah, I mean, I suppose I have to say I do have some colleagues who might think I've sold out a little bit by doing [00:03:00] psychology, and then social psychology in particular. Obviously I do think that's quite unfair. It's kind of besmirching what's actually turning into a very robust discipline. You know, I think social psychology is such an exciting area of research now. But yeah, I started out with maths, and I was interested in kind of theoretical physics during my degree, but also medical statistics. And that's really what I've brought with me to my career as a science journalist: that kind of rigor and understanding, you know, what scientific papers are telling us, how to read the data, what to believe, what not to believe.

So there's not such a disjuncture as it might first seem.

David Elikwu: And funnily enough, I mean, as I've grown older, I have appreciated maths more and more. Not that I actually want to do it. So your journey kind of encompasses both mine and my dad's, where my dad did study maths at a number of universities. [00:04:00] And, you know, I think he had an offer at one point to do further research at Oxford, and I don't know what further research in maths actually means. But practically speaking, I hadn't been super interested in maths early on, and I probably gravitated more towards economics, which I studied, and then I've been interested in psychology and behavioural science, et cetera.

So a lot of that makes more intuitive sense to me, but I think we're also at a very interesting time in the progression of some of those fields, and maybe we can talk a bit more about this later, but, you know, you have things like the replicability crisis, and with some institutions, they're going through a period where suddenly people are questioning, oh, how much can we trust these studies that we've come to rely on?

Outdated knowledge and its impact

David Elikwu: Maybe, actually, this is a decent segue to the intelligence trap. One example that came to mind: obviously a lot of the work of Dan Ariely is probably a decent example, because he's been in the news relatively recently.

Also, one of the examples that comes to mind most for me is something like Thinking, Fast and Slow, which [00:05:00] is a great book that I've now read. It's also one of these books where, at the time it came out, it used a lot of science that was true at the time. And since then, we've discovered new things about psychology, about the brain. So now some of the information in it is outdated, but that book has proliferated so much that so many people have read it. So people will continue talking about the lizard brain and ideas like that.

And so, and I think I've heard you talk about the Dunning-Kruger effect before, there's a sense in which people have a false idea, not intentionally, but a false idea of how much intelligence they really have. People think they know things because they've read certain books, but the books themselves may be out of date. And so you're relying on something that no longer exists, in a sense.

Evolution of psychological understanding

David Robson: Yeah, totally. And I mean, this goes way back. So one of my kind of interests has been looking at the psychology of language. And you know, people are still reading Steven Pinker's The Language Instinct, and so much of that is out of date, as it would be, as science progresses. It's now been about 30 years [00:06:00] since that came out, so it's very natural that our understanding of the brain would have evolved in that time.

Similarly, with Thinking, Fast and Slow that you mentioned, what I think is interesting is that actually a lot of the behavioral economics, especially Daniel Kahneman's work with Amos Tversky, has been really replicable. Like, people don't really question a lot of those biases. But you know, Daniel Kahneman didn't just rely on his own work. He also looked to work on the idea of psychological priming, for example, where, you know, we can be kind of subconsciously nudged in one direction or another with small changes to our environment. And that has been less well reproduced, although there's still maybe a debate over, you know, just how dire the state of that field is.

But yeah, science moves on, books aren't often updated, and I think that's one way the publishing industry should really change, essentially, with these best-selling books. It'd be great if people could issue new editions that just kind of clarify some of those [00:07:00] areas that haven't been so successfully reproduced, or where the kind of basic ideas have moved on.

The challenge of replicability in research

David Elikwu: Yeah, it's a hard one, actually. Because, funnily enough, just as you were saying it, I do remember reading another article about how it's possible that the replicability crisis has been a bit overblown, because there were some studies that came out that seemed to disprove some of the previous studies, and then it turned out those studies themselves were bad. And so actually we've kind of come back full circle to revalidating some of the earlier work as well.

And like you're saying, I think that's one of the difficulties: books and studies, in a sense, are kind of marked by the moment in time at which they're published. And it's very difficult as ideas continue to propagate and move through the world.

It's hard to figure out when to go back and update potential beliefs or assumptions. And if you don't know that there is an update, you don't know whether you should look for it or not.

The complexity of human psychology

David Robson: Yeah, I totally agree. And I think that's what's often neglected. It almost is a Dunning-Kruger effect, [00:08:00] actually, where people might have heard of the replicability crisis, and they kind of know that phrase, but they're not keeping track of it.

And like you say, just because one study fails to replicate a phenomenon, it doesn't necessarily mean that the phenomenon doesn't exist. That study might have been badly conducted, or just through statistical fluke, it might not have produced the same results. So you actually have to repeat it again and again and again.

And we do see with some phenomena, you know, where this has happened, and actually, like you say, the original phenomenon is quite robust over the long term, even if there were some short-term wobbles in people's faith in it.

There will probably never be, with any scientific idea, total 100 percent confidence. But the way I see this developing is that often the replications might disprove part of a theory, but not the whole of it. You might find out that some psychological effect applies in one situation, but the context really matters, and then in another situation, [00:09:00] you know, it can't be replicated, and in another it can. So, you know, I think we're getting a much more detailed view of human psychology here. And I love that, actually.

The human brain isn't, you know, this kind of mechanical system that has physical laws that will always hold, no matter where you test them. We're very complex. There are so many different psychological forces competing that I think it's totally to be expected that this complexity will then emerge in the science as we do more and more research.

How good science writing works

David Elikwu: I actually wanted to ask you one more question that's kind of on this track, that just came up from what you were saying now. I'm interested in what your thoughts are on how good science writing works, or how we can ensure that people have the right ideas about science and what the correct science is, especially considering some of what we've just discussed. I think there's two elements to it.

One is that the truth actually does change over time. Truth is not a fixed-state thing, even though, like we were just saying before, in the hard [00:10:00] sciences that might be the predisposition, right? You know, physics is true, maths is true, this is exactly what the facts are. But actually, in some of these areas of science, the truth is what we know right now, and based on what we then come to know later on, that might change or tweak some of what we previously thought to be true.

So I think there's that part of it, but then the other part of it is this idea, and I was writing a bit about this recently, that a lot of the information we consume is just different types of abstractions, and abstractions have a problem, even though they're good for communicating ideas. An abstraction, for anyone listening, is essentially just a condensed version of something more complex: you're abstracting a core truth or a core idea. And the point is, in this context, the closest you can get to the truth might be the original study. When I say the study, I mean the raw data itself. Then the researcher doing that study abstracts it one time to write the paper that describes [00:11:00] what is in their data. But at that point, you're already trusting the researcher to have done that data analysis well; they are referencing the truth and writing their version of it. There's some stuff they might miss out. There might be their own personal biases about what they think they should include, what they don't want to include, et cetera. So all of that is already at the first level. Then you go up a level, and now it's being published in Science, or in Nature, or, you know, Scientific American. At this point there's been a peer review process, there have been editors, and someone has decided what to show the wider masses and what not to show people. There were papers submitted where they might say, actually, we're not going to put this in this edition, and so on. Then you go up a level from there, and maybe it's a writer like you, or someone else writing a book, and they are referencing the study that came out in that journal.

And so you can see, as you go along, by the end it's someone listening to me on a podcast, and I'm referencing some study that I read about in a book, and the person that wrote the book read about it [00:12:00] in the paper, et cetera.

And so, because you have this chain, by the time it gets to the average person it's been filtered multiple times. I might make mistakes when I reference a study, and the same might happen at any one of those levels. It's hard to figure out how much truth gets lost as the information gets passed along. And the reality is that most people don't want to read the original study. They're not going to go through the data and figure out for themselves: oh, is this actually true? Does this actually make sense?

And so, yeah, I just wanted to get your thoughts on that, particularly as, I think, societally, we're moving to an age where people might just rely on AI. So in fact, they might not read a book or a blog post in the first place. They might just type a question into an AI, and the AI is just going to say, well, this is what I found on a blog, and who knows the provenance of that information or how much of it is true, et cetera.

The essence of transparency and integrity in science journalism

David Robson: Yeah, I mean, definitely with the current state of technology, relying on an AI for factual accuracy is a terrible idea. But yeah, I think you're totally right.

[00:13:00] I feel like I'm this kind of middle step between the science and the public, you know, and I take that responsibility very seriously. So, in my own writing, I always go back to the original papers. I don't rely on the press releases. I don't rely even on my interviews with the scientists as my kind of primary source, although I do interview the scientists. But you know, when I'm quoting a study, I go back to the paper to check that I understand fully what the methods were, what the results were, and then I'll try to confirm my understanding with the scientists themselves. I'll look at those other secondary sources maybe to get some interesting contextual information about, you know, the inspiration for the study, but never for the actual phenomenon that I'm describing; the scientific phenomenon has to come from the paper itself. And I'll kind of analyze the results. I'll look at, you know, what the p-values were. P-values, for listeners: that's just a kind of statistical measure that essentially tells you how [00:14:00] likely it is that some kind of random fluke, rather than a real effect, would have produced the observed results.

So it's very important that you take into account the p-values. And, you know, they cover a huge range, but essentially the smaller the p-value, the more faith you can have in a result. And I try to discount anything that doesn't have, like, a really good p-value. So yeah, I'm quite a nerd in that way.

And I go back to the data and the analysis that have been presented. Then, even with that in mind, I also try to look at results that have already been replicated multiple times, and I try to incorporate the nuances that might come out of that as part of the storytelling. So, I think there's always a temptation with science journalism to paint an overly simplistic picture. But actually, I think the interest is often in looking at, you know, when does the psychological effect occur and when doesn't it? And I think readers respond to that; they actually have more faith when they can [00:15:00] see that you're paying that due diligence and that you're not this kind of snake oil salesman who's just trying to sell your book rather than present the actual complex truth to them.

And I think people do really appreciate that.
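As a concrete aside for readers who like to tinker: the short Python sketch below simulates a made-up two-group study and computes a p-value for it. The data, the effect size, and the choice of a Welch's t-test are all illustrative assumptions, not anything from the episode or from David's books.

```python
# Hypothetical sketch (not from the episode): what a p-value measures,
# using invented data for a made-up two-group study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100, scale=15, size=30)    # no intervention
treatment = rng.normal(loc=108, scale=15, size=30)  # simulated +8 effect

# Welch's t-test: the p-value is the probability of seeing a difference
# at least this large by random fluke if there were no real effect.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A smaller p makes the result harder to explain as pure chance; it
# doesn't by itself prove the effect is real or the study well designed.
```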

What is intelligence, and how has IQ been measured?

David Elikwu: So [00:16:00] let's talk a bit more about the intelligence trap and the concept of intelligence more broadly. I would love if you could explain or describe, you know, what intelligence is, and perhaps some of the history of how IQ has been measured, or how intelligence has been measured in a sense.

Because, funnily enough, referencing our conversation up until now, I find it such an interesting field where there can be so much data and so many studies, and yet people still draw remarkably different conclusions about what intelligence is, what it's like, and how to measure it. And it's fascinating how so many different versions of the truth can exist at once, in a way that doesn't really apply in some other types of science or some other fields.

David Robson: Yeah, totally. I mean, it's very much open to interpretation. Broadly speaking, I think most psychologists would understand that general intelligence is this kind of underlying processing power that the brain is meant to have. And it's a general intelligence because it's meant to apply to lots of different skills.

And this can be seen [00:17:00] in so many studies. It's one of the best-replicated results in the whole of psychology, if not science: in general, when you give these cognitive tests that look at different specific abilities, so nonverbal reasoning, vocabulary, mathematical ability, short-term memory, all of these kinds of skills that could be seen to be independent, what you do find is that there is a correlation between them for most people. And that common factor, as we call it, is meant to be the general intelligence factor. That seems to correspond to anatomical differences in, say, the brain's wiring. For people with higher IQ, the brain's wiring just seems to be a bit more efficient; there are, I think it's fair to say, more long-distance connections between different brain regions that can allow messages to be passed across the brain more quickly. And then that can help you to perform those kinds of abstract tasks that we've spoken about: to learn quickly, to see [00:18:00] patterns quickly. So that definitely does exist.
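For the statistically curious, here's a minimal Python sketch of that "positive correlation between tests" pattern David describes: it simulates scores that share a single invented latent factor and shows the positive correlations a g factor is meant to summarize. The loadings, sample size, and test names in the comments are assumptions for illustration only.

```python
# Hypothetical sketch (not from the episode): simulate scores on four
# cognitive tests that all share one latent factor, then check that the
# scores correlate positively -- the pattern a "g" factor summarizes.
import numpy as np

rng = np.random.default_rng(0)
n_people = 1000
g = rng.normal(size=n_people)  # latent general factor, one per person

# Each test score = shared g signal + test-specific noise.
# The loadings below are invented purely for illustration.
loadings = [0.7, 0.8, 0.75, 0.6]  # vocabulary, reasoning, maths, memory
scores = np.column_stack([
    w * g + rng.normal(scale=np.sqrt(1 - w**2), size=n_people)
    for w in loadings
])

corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 2))  # off-diagonal entries are all positive

# Largest eigenvalue of the correlation matrix / number of tests ~ the
# share of variance one common component (a crude g stand-in) explains.
eigenvalues = np.linalg.eigvalsh(corr)  # sorted ascending
print("variance explained by one component:",
      round(float(eigenvalues[-1]) / len(loadings), 2))
```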

The importance of the general intelligence factor

David Robson: Where I think there's a lot of debate as well is how important this general intelligence factor is. IQ definitely has real-world consequences for, say, how well people do at school. It's one important factor that can predict academic success, and to a certain extent, success in the workplace, especially in more academic disciplines. So say something like being a lawyer or being a doctor, for example. So it does matter, but I think lots of scientists in the past had spoken about this as if IQ is the only factor that really matters, or the predominant factor that matters.

And that's where I would disagree with some of those scientists, because I think IQ is important to a certain extent, but so is grit and determination, so is curiosity, so is open-mindedness, so is critical thinking, all talents and traits that you can cultivate yourself. And if you [00:19:00] have a high IQ but you don't do particularly well on any of those other traits, your IQ really isn't going to be much use to you and could actually backfire. So that's the other thing that we don't really acknowledge: if you're super smart but you're misdirecting your intelligence, it can lead you down some very dark paths. So, you know, conspiracy theories, sharing misinformation, climate denialism, all of these things can occur in people who, say, have high intelligence. But they aren't very good at appraising the evidence to hand, so they don't have those critical thinking skills.

The limitations of general intelligence

David Elikwu: Yeah, sure. And this is the thing, right? General intelligence is obviously extremely important. I think the problem is either in how you apply it, or the extent to which it actually applies and is useful in different contexts. Because, just going off of what you were saying, I think one of the fascinating things is that it's easy perhaps to extrapolate and say, oh, if you have a higher IQ, of course you're going to do better at school, of course you're going to do [00:20:00] better at work, or you're going to be better at solving certain types of problems. But I don't think in reality that's always the case.

And the reason is because at work, it depends on the kind of work you do. There are some people that maybe just do science, or they just do physics. And the good thing about that is you can lock yourself in an office and just do work by yourself, and actually that makes it extremely easy to deal only with the problem. So you are just applying your general IQ to a fixed space, and it doesn't have as many external variables, and that's actually quite fine.

But the reality is, for a lot of other people, their jobs involve dealing with people. And actually that's a whole different skillset: how to deal with people, how to deal with things and objects in the real world. And if you are not personable, or if you're not good in other ways, then that intelligence is not necessarily wasted, but you can't just assume that you're going to be able to maximize your effectiveness in some other domains.

The advantage of combining high IQ with intellectual humility

David Robson: Yeah, absolutely. And you know, this also includes skills like being able to plan. Some people, you know, have really high IQs, and they still really [00:21:00] struggle to prioritize which tasks to get done first and what can be left till later. You know, they're not good at time management. They're not good, like you said, at motivating the people around them or getting their colleagues on board to support them in the project that they want to complete.

And, you know, that's really important in lots of professions where you really do need high intelligence, but some people are just less able to do that.

And I would say, you know, even in academia, we know that there is a pretty high correlation between IQ and academic success, but it's not the whole story at all. And so there was one study that compared people's IQ scores and their intellectual humility. So that is their ability to accept, you know, the limits to their knowledge, whether they might be wrong, and whether they can correct their mistakes.

Now, there wasn't a correlation between IQ and intellectual humility. So actually, you could be super smart and very humble, or super smart and not [00:22:00] humble at all. But what they found was that actually it was the combination of the two that was an advantage. And if you had a lower IQ but high intellectual humility, you probably still performed much better than the people with just the high IQ but kind of middling intellectual humility. It was such an advantage. It really did, like, raise their academic results. And that's because it's so essential: you can make up for having slightly slower processing if you're the kind of person who's constantly correcting themselves, finding proactive ways to boost the amount they're learning, who's constantly curious about the world and wanting to fill in those gaps in their knowledge.

So I still think that we've very much overestimated the importance of IQ in our society as a whole, and we've really neglected all of these other cognitive traits that are so important for success in so many different domains.

The misconceptions of success

David Elikwu: Yeah, I think you're right. And I think the interesting aspect is that, I guess as humans, we naturally try to pattern [00:23:00] match, and in reality what makes that difficult is that it's hard to distinguish which facts are generalizable and which are individual. There are a lot of things we try to generalize, where we say, oh, okay, Elon Musk acts like this and he is this successful. That means if I do this, then I will get that.

Bill Gates was really smart and he dropped out of Harvard, so maybe if I do that, then I'll get that. And there's obviously the immediate problem there that, you know, making a plan that involves you having to be Elon Musk already kind of invalidates whether or not it will work for you.

But also it's the fact that with a lot of these people, and Steve Jobs I think is quite a common example, on one hand he had many brilliant qualities, but then on the personal side of his life, from some of the people that have something to say about that, it might not all be great either. And a lot of those people are perhaps great in spite of those things, rather than those things actually having some positive correlation.

A higher IQ can reveal the extent of one's knowledge gaps

David Elikwu: And just like what you were saying about the correlation with humility as well, it's actually something I was [00:24:00] thinking about recently, in a sense. And I guess it does make sense to me, what you're saying, that they are uncorrelated. Because what came intuitively to me as I was thinking about it, and this could be wrong, this is just my personal interpretation, was that one of the benefits of having a higher IQ is that it actually helps you to realize how not smart you are, at least in my personal experience. But perhaps in your book, and I'd love if you could come back on this, I think there's a balance, where sometimes there are some people, and maybe this is a little bit of Dunning-Kruger, right? They don't know enough to know what they don't know. And they just have a very simple, abstracted view of the world or of a particular thing, and to them, that's all there is. So: if all I know is this, these are all the facts that I have, I can draw an assumption based on those facts, and that is the truth. But actually, if you had maybe some more intelligence and more data points to match, you might realize that there's a whole additional world of stuff that not only does this person not know, but that goes beyond perhaps what I already know [00:25:00] right now.

And I can draw an inference based on the amount of information available to me now, but there is an even wider world of information that I don't yet know. That's what came to me, but I think that might not always be the case.

Intelligence can cause overconfidence and closed-mindedness

David Robson: Yeah, I mean, I think you're totally right that the relationship between intellectual humility and intelligence and expertise is really complicated. I don't think there is a simple correlation, but I think, you know, depending on where you lie on the spectrum of intelligence or expertise, you might find that your intellectual humility is higher or lower, almost in a U-shaped curve.

So what I think the research I've looked at shows is that, say, people with kind of lower education, lower expertise, lower intelligence, you're right, they might have this kind of completely unfounded confidence that relates to this Dunning-Kruger effect. They don't know how much they don't know, essentially. They just assume they'll ace a test on a subject that they haven't, you [00:26:00] know, even explored at all. Then once you start building up your knowledge, you begin to recognize that it's actually a lot harder than you initially thought.

Then you can get to kind of a later stage where you have mastered the discipline, and so you probably do have, at that point, you know, a good idea of where you stand compared to your peers. But it can lead to this sense of earned dogmatism. You know, you've got your credentials now, you've passed your exams. You start to become closed-minded, because you almost believe that you know everything there is to know about the subject.

And so that can make you a lot less flexible in your thinking when new evidence comes along, because you kind of think you don't need to read it anymore. If you've got a degree or a higher degree, you assume that the field you studied stopped as soon as you stopped studying it.

Earned Dogmatism vs Intellectual Humility

David Robson: And that is a big problem. Like, you know, I don't think there's any discipline that isn't still developing, and developing very quickly. [00:27:00] And if you're just going to turn your mind away from all of the new thinking and new evidence, that's a serious problem. And we call that earned dogmatism.

And you can sometimes see it with political pundits who might've had some success in the past, and then they assume that, you know, their intuition is just always going to be perfect. They get really entrenched in one kind of theory of politics, for example, and they just won't listen to any evidence to the contrary.

And so that's what people like Philip Tetlock found. When he conducted research looking at how accurate political pundits are at predicting world events, like election results, he found that they were often no better than chance. And for one subset, when he looked at linguistic analyses of the words they were using and found that they were especially dogmatic, those individuals, those overconfident individuals, were actually worse than random at predicting the results of elections. So he said you might as well have had chimps throwing darts at a dartboard to predict [00:28:00] world events, rather than relying on those points of view.

So yeah, I think there's this kind of U curve that we need to be aware of. But if you have high intellectual humility, I think at any stage of that process of mastery you're going to have an advantage. Because if you're always aware of the limits of your knowledge, always questioning yourself, always looking for confirmation for what you do know, and also trying to interrogate and push at those beliefs to find out if you can find any evidence to the contrary, I think in those cases you're always going to be ahead of your peers, really.

Intelligence doesn't necessarily transfer across domains

David Elikwu: Yeah, and I think the thing is, expertise can be a trap in a number of ways. Funnily enough, just as you were speaking, it reminded me of, I think it was a study where they got some people to play a game. I think it was bridge, or bowls, some game that's played on a green where you roll a ball. I think there's a few different ones that are kind of similar.

But the point was, they had some group of people that were experts at this game, like high-ranking people that are really good at it, and then some that were complete novices. And they [00:29:00] get them to play, and then they change one rule just very slightly, and suddenly the people that are experts, that actually play this game professionally, are no better than the novices. And it's because they are good specifically within this fixed domain: when they know the rules work like this, they know how to excel within those boundaries. If you take away some of those boundaries and suddenly the game is a bit more wild, then the expertise doesn't necessarily transfer.

And I think the same is true of, and you've written about this in your book, this idea that very often our intelligence, or the expertise that we have, doesn't necessarily transfer across domains, but people act like it does. And even within domains. I think that, you know, someone might be specifically a theoretical physicist, and then they will ask them on TV about politics, or they'll ask them about something else, because, oh, this is a smart, intelligent person. Of course they will have some great things to say about this other domain.

And I think because we trust and know that they are smart, at least in some capacity, it's [00:30:00] easy to then assume that they will also be smart about other things. But the reality isn't always so.

The phenomenon of Nobel Prize Disease

David Robson: No, I mean, in fact, it can be disastrous. So, you know, there's this term, and I guess it's more popular among science journalists than the general public, called Nobel Prize disease.

And essentially that refers to the fact that a really huge number of Nobel Prize winners, who have been at the absolute top of their field, had some really crazy beliefs about things that were outside of their field.

My kind of favorite example is this man called Kary Mullis, who came up with the polymerase chain reaction, you know, which we use in the COVID PCR tests, but which is really fundamental to a lot of genetic testing, to constructing the human genome, to all kinds of medical diagnostics and forensic science. So there's no doubting his genius, and apparently that discovery just came to him one day while he was driving down the motorway in California. But then you read his autobiography and it is [00:31:00] so bizarre. I'm not dissing people who have these beliefs, but it's not what you would expect from someone who's got a background in science. So he really believed in astrology (he died a few years ago). He believed in UFOs. He denied climate change. He denied that CFCs were causing damage to the ozone layer, and, you know, we've got really good causal proof that he was wrong there, because as soon as we stopped using those chemicals, the hole in the ozone layer started to heal. He denied that the virus HIV causes AIDS, even though, again, there's so much evidence there, good causal evidence, that when you treat people with drugs that suppress the virus, they don't develop the symptoms of AIDS.

Almost because of his brilliance, he would jump to a conclusion, and then he was so certain he was right that he would defend it against any criticism, often using all of these kinds of logical traps and rhetorical fallacies that weren't nearly as intellectually [00:32:00] rigorous as he thought they were.

Smart people making dumb mistakes

David Elikwu: Yeah. So how exactly does this happen? Because I think you see it in a number of different cases, this idea that people that are extremely smart can also do things that are extremely stupid, and they are equally capable of making some really dumb mistakes, or something that might seem dumb to someone else, or errors that might seem very obvious to someone else who might actually not have that same level of intelligence.

I was going to use Elon Musk as an example again, but people are going to think that I hate him or something. But in a similar way, right? He is someone that is obviously extremely intelligent, but then you could ask questions about his purchase of Twitter and his views on certain things that don't always seem extremely consistent. And sometimes he might be talking about something that you would assume he doesn't know about, and maybe he's using the wrong terms or he's saying the wrong thing, and you're like, I mean, you know, I, a layman, know that that's not the case. Why do you think this is true?

So what happens there? [00:33:00] I guess maybe there's two forms of it. One is genuine intelligence, where we're talking about people that are extremely intelligent: why do they make some of these mistakes? And then the other part, maybe I would make a distinction with people that are just very learned, right? Where you have people that are smart because they've studied, not necessarily because they just have a super high IQ. But they can also fall into the same traps; they've had a great education, but they're making some of the same mistakes as well.

The impact of earned dogmatism on critical thinking

David Robson: Yeah, so, I mean, I guess we've spoken about a couple of phenomena that feed into this. So there's that earned dogmatism, like I said: I think if you've had one big success, you know, you rest on those laurels, and it stops you being as rigorous and careful and conscientious as you maybe would have been at the start of your career. But I think here, you know, with all of these examples, what's really happening is motivated reasoning. And the idea there is that we often come up with our beliefs intuitively, [00:34:00] we just have a hunch that something is right, or, you know, we're invested in it in some way. It could be something that our parents taught us and we don't want to disappoint them. It could just be that we have said something stupid and then we double down because we don't want to admit that we made a mistake, for whatever reason. Once we become emotionally invested in an idea, that can really damage our critical thinking capacity. If we were being totally logical, totally rational, we'd always have an open mind to weigh up the different pieces of evidence that come our way and treat each piece of evidence the same. You know, we would not care about which side of the debate it falls on. We would just appraise it on its merits. So, you know, how good is this source? How rigorous was that study? How well argued and logical is the argument that comes to the conclusion of that report or whatever we're reading? We should treat all evidence with the same criteria.

What is motivated reasoning?

David Robson: But what happens when we're applying motivated reasoning, because we have [00:35:00] this kind of emotional investment in the idea, is that we just stop looking at evidence in that even-handed way. So we start accepting evidence that is really, really poor, and we turn our intelligence not into a kind of truth-finding machine, but into a kind of justification machine. So we'll accept that really bad evidence and try to defend it and justify it. Then we'll look at a really good piece of evidence that contradicts our belief, and we'll use our intelligence machine to tear that to pieces and to justify why we are right, even though there's that evidence right in front of us that shows that we might not be correct. So we're only applying our intelligence to support and justify our point of view, rather than to actually find the deeper truth behind this.

That's what motivated reasoning is. And yeah, like you say, we could apply that maybe to some of Elon Musk's decisions, to that guy Kary Mullis who I mentioned, to lots of politicians; it's [00:36:00] undoubtedly relevant.

But my favorite example is the writer Arthur Conan Doyle. He wrote the Sherlock Holmes books, which really demonstrate a very good understanding of logical deduction. You know, Sherlock Holmes specifically talks about not coming to a conclusion before you've analysed the evidence, rather than analysing the evidence based on your conclusion.

It's almost like a guidebook to how to be a rational thinker. But in his private life, Arthur Conan Doyle was really illogical about, you know, quite a few issues.

He had this kind of [00:37:00] huge faith in spiritualism that led him to throw loads of money at these kinds of mediums who were basically committing fraud, and, you know, his friends were trying to persuade him otherwise; they were telling him how these mediums were performing their tricks, and that it was definitely a hoax.

The same with when he fell for the Cottingley Fairies hoax, when these schoolgirls claimed to have photographed fairies at the bottom of their garden. You know, the photos were kind of smart, but they weren't especially convincing. Like, you could see where they'd stuck the pins, you know.

Again, his friends tried to tell him, don't make a fool of yourself here. But he didn't listen to them, and he just used his knowledge and intelligence to justify those beliefs. You know, he was drawing on the new theories of electromagnetism to try to justify why the fairies could be seen in some cases by these girls but not in others, even though, scientifically, what he was saying just didn't make sense.

He had this kind of furious argument with the illusionist Harry Houdini, who, you [00:38:00] know, really kindly, I think, tried to explain to him how stage magic works, and why these mediums were just doing exactly the same things as what he was doing in his shows. Arthur Conan Doyle didn't just disbelieve Houdini; he actually claimed that Houdini must be a fairy being himself who was just trying to cover his tracks.

He was really a very creative person, but that creativity he was just pouring into this justification, and he ended up fooling himself. Not many other people, but he was fooling himself in all of these cases.

So that's an example of motivated reasoning. For some reason, maybe it has to do with the death of his child and wanting to believe that there was an afterlife where he would be able to meet that child again, he was so emotionally attached to these ideas that he failed to use his own powers of deduction to see the truth there, even when everyone around him could.

David Elikwu: Thank you so much for tuning in. Please do stay tuned for more. [00:39:00] Don't forget to rate, review, and subscribe. It really helps the podcast, and follow me on Twitter; feel free to shoot me any thoughts. See you next time.
