David speaks with Liron Shapira, Founder & CEO of RelationshipHero.com, a relationship coaching service with over 100,000 clients.

Liron is a technologist, rationalist, and serial entrepreneur whose skeptical takes on crypto and other bloated startups at BloatedMVP.com have been read over a million times.

If you wanted an opportunity to dig into everything that is at the frontier of technology right now, then this is the episode for you.

🎙 Listen in your favourite podcast player

The Knowledge with David Elikwu | http://plnk.to/theknowledge

📹 Watch on YouTube

👤 Connect with Liron:

Twitter: @liron | https://twitter.com/liron

Website: RelationshipHero.com | http://relationshiphero.com/

📄 Show notes:

0:00 | Intro

03:16 | Exploring Computer Science and Rationality

05:37 | Overcoming the biggest obstacle to rational thinking

07:43 | Two facets of rationality

10:18 | Rational decision making

18:37 | Angel investing: lessons learned and insights on Coinbase

21:46 | Criteria for Angel investments

25:08 | The importance of specificity

30:32 | Why Axie Infinity failed

33:51 | Balaji's reality distortion field

36:31 | Dissecting the idea of disruption

38:34 | Why you shouldn’t follow investment trends

40:09 | Making the case for Blockchain and NFTs

41:53 | Making better decisions

46:47 | Do corrupt countries need Web3?

52:14 | Do you need mental models?

53:26 | What’s the deal with AI?

56:49 | Exploring the future of AI

59:42 | Turing completeness and its implications

01:01:16 | The optimistic future of an AI-enabled world

01:02:48 | What happens when AI takes all the jobs?

01:06:14 | The case for techno-optimism

01:09:53 | The future of VR and AR

01:15:28 | How technology will shape the future

🗣 Mentioned in the show:

Quixey | https://en.wikipedia.org/wiki/Quixey

LessWrong Sequences | https://www.lesswrong.com/tag/sequences

Predictably Irrational | https://amzn.to/41kE4U4

Dan Ariely | https://danariely.com/

Paul Graham | http://www.paulgraham.com/

Great Filter | https://astronomy.com/news/2020/11/the-great-filter-a-possible-solution-to-the-fermi-paradox

Robin Hanson | https://en.wikipedia.org/wiki/Robin_Hanson

The Fermi Paradox | https://www.space.com/25325-fermi-paradox.html

SpaceX | https://www.spacex.com/

Axie Infinity | https://axieinfinity.com/

Helium | https://www.helium.com/

Wifi Coin | https://morioh.com/p/98a74f3fd8c3

LoRaWAN | https://lora-alliance.org/about-lorawan/

LongFi | https://www.data-alliance.net/blog/longfi-wireless-technology-of-the-helium-network/#:~:text=LongFi

Marc Andreessen | https://twitter.com/pmarca

Andreessen Horowitz | https://a16z.com/

BlackRock | https://www.blackrock.com/corporate/global-directory

Chris Dixon | https://cdixon.org/

Balaji Srinivasan | https://twitter.com/balajis


Nassim Taleb | https://twitter.com/nntaleb

NFTs | https://www.theknowledge.io/nfts-explained/

Ideological Turing Tests | https://www.econlib.org/archives/2011/06/the_ideological.html

Bitcoin | https://bitcoin.org/en/

Lindy effect | https://en.wikipedia.org/wiki/Lindy_effect

The Oracle Problem | https://blog.chain.link/what-is-the-blockchain-oracle-problem/#:~:text=decentralized

Hollow Abstraction | https://twitter.com/liron/status/1464219456918413313

Machine Intelligence Research Institute | https://intelligence.org/about/

Gary Marcus | http://garymarcus.com/index.html

Steve Wozniak | https://www.britannica.com/biography/Stephen-Gary-Wozniak

Luddites | https://www.historic-uk.com/HistoryUK/HistoryofBritain/The-Luddites/

GitHub Copilot | https://github.com/features/copilot

Mike Maples Jr. | https://twitter.com/m2jr

Palmer Luckey | https://twitter.com/palmerluckey

Oculus | https://www.oculus.com/experiences/quest/

Elon Musk | https://twitter.com/elonmusk

Neuralink | https://neuralink.com/

Ready Player One | https://en.wikipedia.org/wiki/Ready_Player_One_(film)

Google Glass | https://www.google.com/glass/start/

General Magic | https://en.wikipedia.org/wiki/General_Magic


👇🏾
Full episode transcript below

👨🏾‍💻 About David Elikwu:

David Elikwu FRSA is a serial entrepreneur, strategist, and writer. David is the founder of The Knowledge, a platform helping people think deeper and work smarter.

🐣 Twitter: @Delikwu / @itstheknowledge

🌐 Website: https://www.davidelikwu.com

📽️ YouTube: https://www.youtube.com/davidelikwu

📸 Instagram: https://www.instagram.com/delikwu/

🕺 TikTok: https://www.tiktok.com/@delikwu

🎙️ Podcast: http://plnk.to/theknowledge

📖 EBook: https://delikwu.gumroad.com/l/manual

My Online Course

🖥️ Career Hyperdrive: https://maven.com/theknowledge/career-hyperdrive

Career Hyperdrive is a live, cohort-based course that helps people find their competitive advantage, gain clarity around their goals and build a future-proof set of mental frameworks so they can live an extraordinary life doing work they love.

The Knowledge

📩 Newsletter: https://theknowledge.io

The Knowledge is a weekly newsletter for people who want to get more out of life. It's full of insights from psychology, philosophy, productivity, and business, all designed to help you think deeper and work smarter.

My Favorite Tools

🎞️ Descript: https://www.descript.com?lmref=alZv3w

📨 Convertkit: https://convertkit.com?lmref=ZkJh_w

🔰 NordVPN: https://go.nordvpn.net/SH2yr

💹 Nutmeg: http://bit.ly/nutmegde

🎧 Audible: https://www.amazon.co.uk/Audible-Free-Trial-Digital-Membership/dp/B00OPA2XFG?tag=davidelikw0ec-21

📜 Full transcript:


Liron Shapira: I think a nuke is a good mental model because an AI, once it's like doing its thing, there really is no off button. Like once it's commandeered a bunch of computers and the algorithm is just churning away, there's no reason to think that humans have the power to go to all the instantiations of the AI and be like, you turn off, you turn off. So a nuke can be really destructive, but it has a finite set of fuel and it doesn't add more fuel as it burns. Whereas an AI keeps adding fuel, like there's actually no firewall that will stop an AI explosion. So I feel like this risk is being underestimated right now.

David Elikwu: Hey, I'm David Elikwu. And this is The Knowledge. A podcast for anyone looking to think deeper and work smarter. In every episode I speak with makers, thinkers, and innovators to help you get more out of life.

This week I'm speaking with Liron Shapira who is a technologist, rationalist, and serial entrepreneur. He's currently the founder and CEO at Relationship Hero, and we had a jam-packed conversation. If you wanted an opportunity to dig into everything that is at the frontier of technology right now, then this is the episode for you.

So Liron and I talked about his very first startup, which raised $170 million and then blew up and failed completely, and how he pivoted from that, taking all the lessons he learned into building his current startup, Relationship Hero.

We talked about how he got into the rationalist community and all the things that he thinks about AI, the pros and the cons, the future prospects for a world that is enabled with AI, whether it's going to kill all of us or not.

We also talked about the pros and cons of web3, crypto and blockchain, and whether those are really promising technologies or not. And I think one of the ongoing threads that you're gonna hear us talking about is concepts like specificity and hollow abstractions, which Liron writes a lot about. Really it's just the idea of going beyond the abstract, and not just thinking of these as general concepts and ideas that, oh, this would be great, or, oh, this would work. If you actually dig deep into a lot of these ideas, as we do in this episode, it will help you to understand what's real and what's fake. So we also talked about the idea of the metaverse and augmented reality, virtual reality. What are the aspects of that that are promising? What are the aspects that might just be a bit of hype?

So this is a really engaging episode about the frontier of technology in various respects. And not just what we can love and appreciate about how technology is developing, but also how we can think critically about the ideas we're being presented with.

So you can find Liron on Twitter @liron, and you can get the full show notes and transcript, and read my newsletter, at theknowledge.io.

Every week, I share some of the best tools, ideas, and frameworks that I come across from business, psychology, philosophy and productivity. So if you want the best of what I have to share, you can get that in the newsletter at theknowledge.io.

If you love this episode, please do share it with a friend, and don't forget to leave a review wherever you listen to podcasts because it helps us tremendously to reach other people just like you.

I was looking at your writing and thinking about a lot of your background, and I think the two main things that underpin a lot of what you talk about are computer science, in the sense of everything it interacts with, and then rationality.

So I'd love to know, from your perspective, how did you come to those two places? Like, what got you interested in computer science? What got you into this idea, the concept of rationality, and digging deeper into that community? 'Cause there's a whole community around it as well.

Liron Shapira: Yeah. Well, I'm really into rationality and computer science. I think you really got my number on that. You know, it's just been like a lifelong obsession for me. I was always just very nerdy. I loved just, like, you know, being in my head thinking about chains of logic. When I first learned that you could program a computer, it was actually from a library book that had examples in code. I'm like, what's going on here? You're typing stuff in that makes the computer do stuff. What? And so of course I ran home and asked my dad, you know, how does this work? And my first programming language was BASIC, using examples from that book, and I'm like, oh man, I was in heaven, right? So it was just really a good personality fit for me. And then I studied computer science in college. I studied a lot of math, and, you know, metamathematics. It's this branch of math where it's like, you know, how do you formally encode a proof? And what does it mean to prove something? You know, how do computers prove things?

So yeah, like I combined math and computer science, and then I got into LessWrong, you know, The Sequences, written mostly by Eliezer Yudkowsky. I'm not sure if you're familiar with the LessWrong Sequences.

David Elikwu: Vaguely. I've come across LessWrong, but feel free to explain more.

Liron Shapira: Yeah. So it's this giant, multiple-thousands-of-pages corpus of basically how to think, like how to operate a human brain to try to approximate the level of an ideal reasoner built from scratch. So it's kind of like taking an AI lens to the whole philosophy field, and it's like, okay guys, it's not just about what feels right here. It's like, we gotta build an AI from scratch here, so we better understand, you know, what it really means to think logically. So I thought that was a very powerful, fresh approach to philosophy, to kind of turn the AI lens on philosophy. And that's just been highly influential on me. So in addition to my computer science background, understanding the rationality sequences and, you know, redoing philosophy from the AI mindset, it's completely rewritten how I think about everything. Like now I'm like, okay, I'm just a brain, right? I'm like a type of algorithm. I have some flaws that we know about. And I have some ideas on how I would rewrite myself as a better algorithm if I could. That's kind of the underpinning of everything I do.

David Elikwu: Okay. I mean, I'd love to dig into that in a very basic way. What would you say is the biggest distinction? How do people not think rationally? What was the biggest bridge to cross, either for you personally or maybe for the average person, that stops us thinking in the most rational way?

Liron Shapira: Yeah. So, you know, we don't always think rationally, but first of all, you gotta give humans credit, right? Like, a lot of the stuff we do is rational, right? So if you're just going to the store and buying some nice Snapple, you're probably doing a pretty good job with that, right? And like, your eyes are doing a lot of work telling you, in 3D, right, all the objects around you, and you're planning a route and you're giving the cashier the right amount of money. So like, a lot of stuff is going right, and that stuff is rational.

And so the question is, where does it break down? Right? So, like, the same mechanisms you have that are giving you this really accurate picture of how to get a Snapple, at what point, when you're thinking about, like, aliens or ESP or free will, at what point does it break down? And so, you know, there are common ways that it starts to break down. One is that humans tend to think that things are inherently mysterious. So there's a mode that humans go into where, like, okay, they're buying the Snapple, but now they have to think about the beginning of the universe.

And then there's a really strong temptation to wave your hands and be like, well, that's a mysterious phenomenon. There are two types of phenomena. There's the ordinary stuff and there's the mysterious stuff. And people tend to have, like, a switch where they're like, okay, let's not be logical about this, because it's beyond the realm of logic. And so people naturally don't realize how far you can go with logic. And the reason that is, is because if you look at humanity like a million years ago, or if you look at an ancient tribe, it was so hopeless to try to use logic to reason about that stuff. It made a lot of sense to just believe that you can't, because it's not that you couldn't, just that it was ridiculously hard. And then today we've made a ton of progress. Like, actually, we know quite a lot about the evolution of the universe. And you can really hold off on the sense of mystery, and you can just use pure logic to understand, you know, mechanistically what's going on with your cells, right? Like, with your feelings, you can use evolutionary psychology, you can use logic to talk about your feelings. And so logic is kind of creeping through everywhere. But most people still haven't gotten the memo, right? They're like, oh yeah, I'm logical about my Snapple, but also Mercury is in retrograde, right? And I also believe in astrology, right? So most people kind of shift gears.

David Elikwu: Yeah. One aspect that I'm interested in, and maybe you can shed some light on this: I think of maybe two wrinkles, or two additional facets here. One is, I'm thinking of Dan Ariely's book, Predictably Irrational, and some of the ways humans act that are irrational but to good ends. There can be good outcomes simply by the fact that the ways we act are intentionally irrational, but that can also skew our judgment. But then the other aspect is, I think of something like entrepreneurship as a good example of something that is often highly irrational, even when the outcomes are positive. And sometimes there's the idea of taking risk, and perhaps unnecessary risk, from a purely logical perspective. I know you can break down all the maths of the expected returns, etc. But often you have people taking bets that, on the surface, I think Paul Graham talks about this, look like bad ideas but are really good ideas. And so at first glance this might seem like something not worth pursuing, but actually, once you do pursue it and you break some kind of intermediary barrier, then it actually becomes a better idea. And I think another analogy that he uses is that some ideas look like smooth surfaces, but when you take a deeper look, there are actually lots of little facets, and those are the

Liron Shapira: Yeah. Yeah. He just published that last night.

David Elikwu: Yeah.

Liron Shapira: Yeah, I saw it. It's great. No, I actually love Paul Graham's analogy of the fractal, right? Like, when you get up close to something. Yeah, this might be a little tangent, but I've had the same thought in terms of startups. When you're just a startup trying to go to market, a lot of pitches sound like a smooth pitch attacking a smooth space. Like, oh, we just need 1% of the market, and the market is educational tools for toddlers who have scientist parents, or whatever. So it's just this smooth description. But then when it gets really hairy, you're like, well, actually the part that really matters is that we have the best music backing track. Or, like, there's some random detail that you really can't predict in advance in your pitch. But once you get into the weeds, then you're just like, well, it's not even the best music, it's like, well, actually what we're good at is the exact mechanism where, as the toddler advances through the levels, we know exactly when the levels should go back. So it's something that'd be really hard and random to pitch, but it turns out that that part of your operations turns out to be important. And in my own business, Relationship Hero, there are random optimizations that I spend my day doing, like A/B tests that I did on pricing. Turns out that was a high-leverage thing that I do. But if I worked that into the pitch, like, oh, we're gonna have great A/B tests on pricing, it's like, what are you talking about? Right? So there's a big difference between looking from far away and really just getting into the weeds and seeing what turns out to be important.

David Elikwu: Yeah, yeah, exactly. So at what point did you get into this thinking about rationality, and how did it influence your decision making? Particularly, what I'm interested in is your first startup, which you say failed massively, while your current startup is going a lot better. So I'd love to dig into the story of, you know, what it was like building that first startup, what went wrong, and how maybe your mindset and approach have changed. Because I know you evaluate lots of different startups, both as an angel investor, but also just as a commentator in general.

Liron Shapira: Sure. Yeah. So, my entrepreneurial journey. My first startup, right out of college, was a company called Quixey, and we were doing search engine technology for app stores. And yeah, in terms of the product, it worked okay. We had some partnerships; we powered Ask.com's app search feature. We had a partnership with Alibaba for their app store. And we got a lot of press just for raising a ton of money. So at the peak we'd raised over $50 million of strategic capital from Alibaba, $170 million total over the years. And we didn't really deliver that much, so we kind of raised way more than we deserved. And eventually we didn't have much to show for it. The Alibaba partnership didn't work out, and the whole thing just shut down. And yeah, I mean, it was a massive failure, like a lot of value destruction. We destroyed way more value than we created. And so, you know, at the very least I can salvage some lessons, right? From this expensive education. A lot of people's money got wasted, but I can salvage some lessons.

And one of the lessons is, sometimes people pump way too much money into something that's not working well enough, right? Like, we got more investment than we deserved. And that lesson has been helpful in my career, to be like, okay, even if people have millions of dollars, they may still not know what they're talking about. And that lesson was helpful to me in, like, helping pop the crypto bubble, for instance, right? Like, I don't care that it's a trillion-dollar industry. It's overvalued.

So yeah, and then, you know, moving on to my next company, Relationship Hero. I took the lessons I learned and I'm like, look, this has to be a profitable business, right? That was kind of my constraint going into my current company, Relationship Hero. And the scale of the fundraising has been much lower. We've raised $4 million and we're currently profitable, but our scale is pretty modest. We're still below $10 million a year in revenue, which is not bad for a business in general, but it's not unicorn level. So it's hard to get both high scale and profitability. We're still trying to tweak that dial.

David Elikwu: Sure. So you review a lot of startups at Bloated MVP. I'm interested in what you think are the most common mistakes that you see startup founders make, aside from perhaps raising too much money at the outset.

Liron Shapira: Yeah, I mean, there's really just one major one that connects almost everything else. I mean, it's insane how common this is. Which is just that people don't really understand how they're creating value. Like, they're spending so much time at the beginning of their startup doing a lot of things that don't connect into making something somebody wants, right? And now I'm, like, stealing YC's catchphrase: make something people want. It's really spot on. You know, it's not just a cliche, it's literally that most startup founders just don't make anything that anybody wants. And my observation is, this is such a low bar. So it's a weird situation where you take 80% of startups and they're not passing the bar, and yet the bar is a really low bar. I'm like, what's going on? Why is nobody passing the low bar of making something people want? It's not supposed to be hard. If you're gonna fail as a startup, a cool way to fail is like, okay, you couldn't make the unit economics work, right? Like, the marketing cost was just a little too high; you couldn't make the unit economics work.

But when you fail because you can't even make something that a single person wants, what's going on there? So I dug into that, and it turns out that what's going on is what we talked about before. It's the fractal thing. The idea that when you're zoomed out, when you haven't really gone to market yet, when you haven't launched and you're working on your product, you have this very smooth idea of, like, oh yeah, people need this. They need, like, better analytics, and I'm just gonna make smarter analytics and I'm gonna put AI in it and it's gonna be better. And then when you finally launch it, what happens is just, nobody cares, right? And then you're like, oh, let me email some people, let me get an email list. And it's just like, okay, your email list, maybe they visit the site and then they leave.

You know what I'm saying? It's like you just never got a person to come use the product. It's crazy. And then that's it. The startup shuts down and they never get one passionate user. And that's a typical scenario. And I feel like most people don't know this. With this super common failure mode, everybody's walking, like lemmings going the same direction, because the news isn't out that this is how startups fail.

David Elikwu: Yeah, I totally agree, and I think you wrote something about it and you referenced it as the Great Leap or something like that.

Liron Shapira: Right. So, there's this concept called the Great Filter in cosmology, invented by Robin Hanson, which is the idea of, just like, hey, you know the Fermi paradox? There are no aliens. So there's gotta be at least one step that's really, really improbable in the development from the beginning of the universe.

From the formation of a planet all the way to an intelligent civilization, there's gotta be at least one step that's extremely hard, because there are so many planets and we only know of one planet with life on it. So where's the filter, right? And it could be multiple steps, but there's at least one really big filter. And we're not sure where the filter is for human life, like if it's behind us or in front of us. But my thing is an analogy I make for startups: what's the great filter for startups? Why isn't every startup a unicorn? Why are there only a few unicorns? And it turns out that the biggest filter step is ridiculously early, right at the beginning. It's just the step I talked about. It's the step of getting one person to successfully get value from your thing. I believe something like 80% of people who work on something will never get one person to get value out of the thing. Like, they will fail at the starting line. The starting gun hasn't even gone off and you're already dead.

So it's a really weird place for the great filter to be. And what's even weirder is, if you explicitly make that your objective, if you're like, okay, startups are so hard, I'm actually going to explicitly focus on making sure that one person wants this. There's a trick to making something that one person wants, which is you start by picking a person, and then you kind of stalk that person and you're like, hey, give me 10 bucks. Like, what do I have to do to get 10 bucks? And the person can be like, I don't know, go get me a Starbucks. You get them a Starbucks, you get 10 bucks. You're already past what most startups have done, because you did something that somebody wanted. Now, is that a scalable idea? I mean, maybe? Maybe you have a one-person version of DoorDash if you can get people Starbuckses, right? So it's not the worst idea. I would argue you're further along toward a good idea if you at least made somebody's day and got 10 bucks than if you're working on technology for a year that nobody ever uses.

David Elikwu: Yeah, that makes a lot of sense. So first of all, I completely agree. And I think, taking that a step further, one of the mistakes a lot of people make is that there are a lot of traditional paradigms that people take to heart in a good sense, but maybe a step too far. So even beyond what you were explaining as the first filter, I think the next filter is that a lot of people, when they do focus on value, focus too much on the value that they are extracting rather than the value they're delivering. And so it's not based on empathy at all. It's based on trying to find an idea that is monetizable enough to generate money for themselves, as opposed to generating value for users. And so what you miss is the exact interchange where you are providing enough value to get the money, instead of just the broad idea that this is something where money can be extracted. And this makes a good connection to a lot of the web3 space, as an example, where you see a lot of people essentially creating problems that can possibly be solved, and that people can give money for, but there is no actual value. I think maybe the gap is between fulfilling an urge and fulfilling a need.

So I think maybe it's Paul Graham who talks about this, which is, you know, building startups that fulfill one of the seven deadly sins. And so there are a lot of startups that pursue urges. And you can perhaps get over an initial barrier where people can generate hype. You can generate hype around something by fulfilling an urge: the need for hype, the need for curiosity, the need for people wanting to make money, the need for sex, for whatever. But beyond that, you don't get from the urge to the need. So after the initial hype is gone, there's nothing. And so on a broader scale, I was thinking about this earlier: if you look at a very small window, as an example, 2018 to 2021, you'll see a lot of startups that look successful within that window. But when you take a broader scope and you say, okay, out of the whole 21st century, give me a list of startups that worked, those don't even exist, because they weren't statistically significant in any way. On the broader scale they might as well not have existed, because they didn't actually create any tangible value, but they managed to extract some value in the brief window that they existed.

Liron Shapira: Yeah, that's true.

David Elikwu: So, you're also an angel investor. I'd love to know, okay, how has that gone in general? And then we can talk specifically about Coinbase, which I think was probably a big win, perhaps one of your biggest wins, I'm not sure. But yeah, tell me more about that.

Liron Shapira: Yeah. Coinbase was my biggest win, which is ironic because, you know, I'm so anti-crypto, anti-web3, and I think it's so overblown. And I was just incredibly lucky that I invested in Coinbase in 2012. Because back in the day, a lot of rationalists were passionate about Bitcoin, because it's like, look, this could be used as a currency. Maybe. Like, probably not. But if it is, the upside's really high, right? And so back in 2012, my first insight as an angel investor, when I even started angel investing, when I even had any cash in my bank account to angel invest, I had this observation that many rationalists had, which is like, look, there is a thousand-x outcome here, right? For Bitcoin. Which turned out to actually happen, which is, you know, pretty wild; that kind of thing rarely happens, right? But if it does happen, it's a thousand-times return, if not ten thousand. And so, you know, if you do the math, it's like, well, I only need like a 1% chance, right? Or like, maybe there's a 10% chance, but then I'll get a thousand-x return. So 10% of a thousand, that's an expected value of a hundred-x return. I might have to wait a few years for it, but it's pretty good, it's pretty attractive, if you can make a lot of bets like that and you're right that they all have some chance of return. Now, over the years, I'm like, oh man, Bitcoin, you know, it's not really usable as a currency. It has macroeconomic issues. Maybe it is a little bit. So I've become so much less bullish. And I was just incredibly lucky that it was hard to sell my stake in Coinbase, because I owned Bitcoin, and I sold the Bitcoin and I just didn't sell the Coinbase. And actually, I did end up selling more than half of my stake in Coinbase.

So it really is just honestly sheer luck that I was able to hold enough Coinbase that I did end up pocketing like a thousand-x return, like 10,000 into 6 million, just an insane, unheard-of return. And yeah, the fact that I held onto some was totally luck, right? So it's very sobering, it's humbling as an angel investor to be like, okay, the only reason I can even say that my angel investing career has been a success is because of this one incredibly lucky element. And, you know, I sold my Bitcoin. Do I have other investments that are pretty good? I have some that are pretty good, but it's hard to say how much of a win my investing career is as a whole, because the time horizon is also super long, right? So my first investment, Coinbase, is literally my first investment, beginner's luck. And I just cashed out of that, you know, a year and a half ago. So if I do have another slam-dunk investment, I would only be cashing it out either now or later, and most of my angel investing was done in just the last few years, right? So maybe in five years I'll have the next Coinbase finally come to fruition. And I do have some companies that are doing quite well, that I would say are maybe worth 40 times more than what I invested at, right? And 40 is still a long way from 2000, and they're not liquid yet. But yeah, honestly, I have no idea how good I am as an angel investor objectively. And that's probably why I don't angel invest a ton. I only angel invest when I see a company where I just feel like I need to be involved, because the team is so good to be in communication with, or the product is something that needs to exist. So I've kind of given up on this idea that, oh yeah, I know what the expected value of this investment is. It's more like, when I see enough things that I like about an investment, I try to get involved with a small check, because at least I get to be part of the journey, and it just seems like a great journey that's interesting to me.
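[Editor's note: a minimal sketch of the expected-value math Liron walks through above. The 10% chance and thousand-x payoff are his figures from the conversation; treating the downside as a total loss is an added assumption for illustration.]

```python
# Back-of-the-envelope EV for the 2012 Bitcoin bet as Liron describes it.
p_win = 0.10            # his guess at the chance Bitcoin "works"
win_multiple = 1000     # payoff multiple if it does
lose_multiple = 0       # assumption: total loss otherwise

expected_multiple = p_win * win_multiple + (1 - p_win) * lose_multiple
print(expected_multiple)  # 100.0 -> the "expected value of a hundred-x" he cites
```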

David Elikwu: Sure. Okay, so outside of personal interest, what are the criteria that you might look for, either for an angel investment that you are personally making, but also just the criteria for being a good startup?

Liron Shapira: Yeah. So I have some boxes that I check. So, like, when I'm doing a Y Combinator mock interview, or just meeting a founder for the first time, one box is just, you know, the quality of the conversation. Like, is the founder answering questions in a direct way, right? Do they sound intelligent? Are we having high-bandwidth communication right now? And if we don't have that, that's gonna make me a lot less interested, because even if they're running a good company, it's hard to even communicate with them, for whatever reason. Yeah, so that's one signal.

And that does rule out a significant percentage of people. That's gonna filter out, I'd say, at least half. Okay, there's also the filter where I can judge software execution, right? So just by them showing me some slides or a demo of the software, I can judge, like, oh, okay, I can kind of tell how they coded this. 'Cause that's kind of my niche, right? Software engineering. So that's another box I can check: oh, these guys are good at software engineering. Which is great, because then, you know, you've got technical founders on the team, so even if stuff goes wrong, you can try to hack on different stuff, right? You can keep doing lean prototypes. You just have more shots on goal when you can easily put stuff out there using software engineering skills. So that's another box I could check.

Another box is just the quality of the value prop, right? Is this clearly something that people want? So there are some ideas where you're like, oh yeah, of course people want this. Another box to check is just, is this a sweetheart deal? Right? And this is something that feels a little slimy, and as a rationalist, nerdy, introverted guy, this isn't a part of the industry that I like. But there is this idea of a sweetheart deal, right? Where it's like, oh, I have this connection. You wouldn't even be talking to me, but, you know, you met me through a friend, we're in this network, and so I get to invest. And this is clearly a good investment; you have a good reputation. I'm just gonna give you a check, 'cause I trust you, and you're not even talking to that many people.

So that's like a free box that I get to check if I think I'm getting a sweetheart deal. And in some cases, just by being a high-net-worth individual, you can go on AngelList and you can subscribe to these deals. I mean, you don't even have to be high net worth, you can just pass an exam. So if you're on AngelList and you're getting sent deals, right? Like, the other day I bought secondary shares in these great-name companies, right? SpaceX, Stripe, Anduril. I mean, these are great, great companies. Did I pay the right price for them? I don't know, right? Who's to say? If the economy crashed, I will have overpaid, right? But I still consider it a little bit of a sweetheart deal, because I think that the average person who knows the names of these companies, like SpaceX for instance, isn't getting that email saying, hey, you can invest in SpaceX, right? So it's still a sweetheart deal in that sense. It's not a fully open auction; there are still some restrictions on who even gets to see the opportunity to bid. So that's a type of sweetheart deal. I think there are a couple more flags. I mean, one of them is obviously the traction graph. The revenue graph is the gold standard, right? So if you're seeing an exponential revenue graph, investors are gonna throw money at you. And sometimes it's gonna be, like, BS, because it's like, oh, well, it's unprofitable. But it's still a major signal. If it's somewhat profitable and it's an exponential revenue graph, I'm gonna be like, now the question is, why shouldn't I invest? Right? It totally flips the conversation. So that's a signal.

And then passionate early users, right? So that's even with no revenue, if there's just a lot of passionate usage, or a lot of retention. So I'm a metrics hound like everybody else, but I guess what distinguishes me is that there are just certain areas where I feel like I can have a better prediction from signs that are even pre-traction. I'm like, okay, you have no traction to show, but I can see a lot of things looking good in the very early stages. Okay, so that's a rough summary of my approach as an angel investor, which is not super rigorous or consistent or professional, but it's how I operate.

David Elikwu: Awesome. So one of the first things that you mentioned was this idea of specificity, which you've written about, and I think it comes up in a lot of your objections or critiques of web3 and Bitcoin in general. So I'd love to know, maybe, what's your beef with Bitcoin and cryptocurrency in general? And actually, maybe this is a better question. I don't know if there is a distinction, but let me know if there is, between what you might be critical about as it pertains to blockchain, as distinct from cryptocurrency, as distinct from maybe Bitcoin specifically. They're kind of three different layers, which I think are slightly different and might have slightly different uses, but a lot of the time they're all lumped in together. And then there's maybe web3 on top of that as a name, and then NFTs as an additional layer.

Liron Shapira: Yeah. So basically, what is my beef? Let me be precise about my beef with blockchain. My biggest beef is with this whole idea of web3, and I think a good definition of web3 is: blockchain applications other than cryptocurrencies, right? So Bitcoin by itself is not quite web3. Monero, not quite web3. Ethereum, if you just look at Ethereum, not quite web3. But if it's applications built on Ethereum, like, you know, a Twitter clone somehow built on Ethereum, an Uber clone somehow built on Ethereum, suddenly that is web3. So web3 is the blockchain stuff that's anything other than cryptocurrency. Now, if it's Uber but you can pay with Bitcoin, I would argue that's not web3 yet, right? That's just cryptocurrency used as payment. Okay, so with that definition of web3, my beef is that I think web3 is literally an incoherent zero. Not, like, slight value; literally zero. And it's a zero on the level of just logical coherence. So anytime somebody even explains how web3 supposedly creates value, within that explanation there is already a logical flaw. Like, it's an explanation that is so bad that you don't even need to go to market, you don't need to build anything. You can just, you know, retire the explanation and admit that you didn't think right when you made that pitch. And if you look at examples of web3 failures, you can trace the failure all the way back to the initial logic. Like Axie Infinity.

The reason Axie Infinity failed isn't because they got hacked, although the hack was ridiculous; it was like a $600 million hack. It's not because they got hacked, it's not because they didn't implement it well, it's not because they got unlucky. It's because, on paper, they just created a Ponzi scheme. Ponzi schemes blow up for a while and then they crash. That is what happens. And you didn't have to run the experiment. You could have just looked at the blueprints for the experiment and realized it was a Ponzi scheme, and realized that blockchain technology had nothing to offer besides implementing a Ponzi. It was just an implementation layer for a Ponzi, and you don't need blockchain technology for that; you could do a Ponzi on web2.

So that's Axie Infinity. And if you look at Helium, it's a similar thing. You know, Helium, the wifi, the LoRaWAN, LongFi they call it, right? These routers. People were installing these routers at home in order to earn cryptocurrency, and the scheme is called Helium. That also made no sense, because if you wanna reward people for having shared wifi, it's a questionable value prop. But if you wanna do it, just put the accounting ledger in a regular database. You can pay them with cryptocurrency if you really want, but you don't need a decentralized accounting method. You know, you don't need to decentralize the server that tells you how much people owe each other in this network. So the pitches are mind-blowingly incoherent. And the reason I personally got kind of obsessed with dunking on web3 is because there was a disconnect between the caliber of the people and the institutions and the capital that were talking about this idea, and how incredibly flawed the idea was at a basic logical level. Like, the idea should not have passed a high school business class. And here you have people like Marc Andreessen, the people that they hire at Andreessen Horowitz, Chris Dixon. People that, if it weren't for the web3 stuff, I'd be like, these are great people. Like, I really respect them, they have some insights, right? I see them as, you know, mentors. I respect their successes. But they've completely clowned themselves on this whole web3 thing, and it's still going, right? They still have like $2 billion to deploy in their $7 billion fund, and they're lighting it all on fire. And I'm like, what the hell is going on with web3?

And then, just to finish my overview of what my beef is: then you move on to Bitcoin, which is not exactly web3, right? It's the original value proposition. And with Bitcoin, it's not quite as easy to say that the logic is fully incoherent. It is more coherent. Like, look, it's just a thing, right? It's a protocol that runs, it has some protections against the 51% attack. It's an interesting protocol, the proof-of-work blockchain. And it's gonna somehow hold its value and somehow be used for transactions, or be used as a store of value. It's logically consistent. But the problem is, it's not clear, as a matter of macroeconomics, or as a matter of game-theoretic equilibrium, that there's any coherent state where Bitcoin can have a high value consistently, and not be ridiculously volatile, and connect into a legal, well-functioning part of the world. It seems like Bitcoin always has to be the sideshow, that it kind of undermines itself when it gets too valuable, because then the network freezes up, like it can't transact very much. It seems like Bitcoin can be a number of different things, it can get into a number of different states, but none of the states is really good and really self-consistent. That's my issue with Bitcoin. It's really cool, and everybody wants it to be everything, but in reality it's hard to imagine it being anything successful.

David Elikwu: Sure. So going back to this idea: you mentioned Andreessen Horowitz as an example, but probably some of the smartest minds in the technology space were extremely bullish about web3. So how did a lot of those people get that so wrong? Particularly, Axie Infinity was probably one of the most glaringly obvious ones to me. And I'm just wondering, I am probably not as smart as a bunch of these people. Why does something that looks so obviously flawed to me not resonate with them in the same way? And is it just this sense that we were talking about, that there is some element of building great startups which requires some irrational optimism? Or is there something else that I'm missing?

Liron Shapira: Well, I have my own pet theories about why Andreessen Horowitz has gone down this route and, like, turned evil basically, or turned dumb. Dumb and evil, some combination. I have my theories, and I just wanna separate that. When I make a claim and say web3 is logically incoherent, and the things that they're deploying capital into are a disservice to their LPs, and they're being irresponsible, those conclusions I feel very confident about; I think I'm on very firm ground. Now, separately, I can go into speculation about what I'm guessing they're doing, and you can take that with a grain of salt, because I don't claim to be able to psychoanalyze them. But here's my attempt.

I think that Marc Andreessen's strategy is to build basically the BlackRock of venture capital, where he's just trying to optimize assets under management. And so the funny thing is, for him, the crypto fund is already a success. I mean, I think that they should give up on getting a carry, because they're not gonna have a positive return on the fund, so they're not gonna make any 20% carry. But that fund, the crypto fund with the $7 billion, the four different crypto funds, those are walled off from the other funds within Andreessen Horowitz. So it's not gonna take a chunk out of the carry from the other funds. So they still have carry from the non-crypto funds, and then in the crypto fund, because they pumped assets under management to $7 billion, that means their share, the 2% per year, turns into 20% over 10 years. So when you're taking 20% and it's $7 billion, what is that, $700 million? I don't even know. It's like more than a billion. It's insane, the amount of management fees that they're gonna get on the $7 billion. So that's a win, right? So they're pocketing these management fees while they're running the fund into the ground. They're burning, you know, 40% of the capital is destroyed, and they're pocketing hundreds of millions, maybe even a billion, in management fees. So from Marc Andreessen's perspective, as long as he kind of stands back and he's like, look, LPs invested in the thesis that they want exposure to crypto, we gave them exposure to crypto, we gave them the Bored Apes at a $4 billion valuation, we're the number one at giving them exposure to crypto, and we deserve this management fee. So for Andreessen, you know, it's a win. Of course, he's completely destroyed his credibility with somebody like me, or people who think my arguments make sense.
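[Editor's note: a rough sketch of the management-fee math Liron is gesturing at here, using the standard "2 and 20" venture terms he references. The $7 billion figure is his; the 2% annual fee over a 10-year fund life is the industry convention, not a disclosed a16z number.]

```python
# Management fees on the a16z crypto funds under a "2 and 20" structure.
aum = 7_000_000_000   # combined crypto funds, per the conversation
annual_fee = 0.02     # 2% of assets under management per year
fund_years = 10       # 2%/yr over a typical 10-year fund = the "20%" he cites

total_fees = aum * annual_fee * fund_years
print(f"${total_fees:,.0f}")  # $1,400,000,000 -> "more than a billion" checks out
```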

But at the end of the day, if somebody is not following super closely and Andreessen Horowitz wants to invest in their company, are you gonna take the check? You should still consider taking the check, right? Even if the guy is being ridiculous. You know, it's still money, if you retain enough board control, if you feel comfortable with it. So look, Andreessen's a smart guy, and it seems like he's a successful guy and he probably knows what he's doing to some degree. His employee Chris Dixon, I think there's more of an issue with. I do think Chris drinks his own Kool-Aid, and with a lot of the stuff he says about web3, I do think there's an intellectual limitation there. As far as I can tell, he struggles to process his own claims and see their incoherence. And so that is my best guess about what's going on with Chris.

David Elikwu: Fair. So, incoherence is a strong word, which you've used a few times. But also, I think there's a flip side to it, not so much as a criticism, but that a lot of the descriptions are very surface level when you look at them. So Balaji comes to mind as a good example. I referenced him just on the last episode of this podcast, not in any negative way; I was just referencing one of his ideas. And what I find really interesting is that he could talk about something, and I have listened to him talk about some of his ideas for probably a grand total of like 16 hours, because every podcast interview that he does is three or four hours each, and I can come away from all of that and I can't relay what he just said. I can't explain to you why this thing works. Because, I mean, he throws in a lot of really interesting stuff, a lot of cultural references. He'll pivot between mathematics and physics and biology, and everyone that speaks to him comes away with this sense of how smart he is. But I still can't explain to you, you know, he's got this current idea, The Network State, and yeah, I just can't explain it succinctly in a way that still resonates and makes sense apart from when he's saying it.

Liron Shapira: Yeah. If you search on my Twitter, people were wondering why I had a few months where I was kind of obsessed with Balaji. Like, I was on a Balaji kick on Twitter, and people were like, okay, leave Balaji alone. But the reason was because I shared your feelings, where I'm like, what's going on with this guy? Like, why does he go on all these podcasts, ramble for four hours, and then I don't have any coherent takeaway? What is actually going on? What is he saying? Let me carefully listen to what he's saying and try to unpack what's going on here. And so I finished my Balaji kick because I think I got to the bottom of it. If you look at my Twitter, it all came to a head when I broke down one of his most recent podcast interviews. He did an interview for a16z's podcast, and I really broke it down. And the pattern that you can see in my clips is: the interviewer asks him a very straightforward question, and he just completely ignores the question, completely ignores the question, and starts rambling. He rambles for like 24 minutes. And then, within the ramble, I found like one or two sentences that kind of relate to the question. And then the interviewer tries to bring him back on track, asking a very simple follow-up, and again, a 20-minute ramble in which I could not find an answer to the question.

So it's not like he's jumping off and making associations. He's in his own world. And you know what else is crazy? The interviewer is buying it. The interviewer's like, wow, I'm so lucky to be able to hear this ramble. Like, we're in the presence of greatness here. So my conclusion was just that I think interviewers need to uphold a higher standard, where when they ask a question, they really do need to make sure that the thing being said, at least after they edit it, is logically connected to the question they asked.

David Elikwu: Yeah. And I think this connects in some sense to the idea of fact checking, but it's almost the absence of facts to check. And I think it proliferates quite a lot within the tech world, just because there is this idea that a lot of people are incredibly smart. And I see this quite often; people talk about this idea of people that have almost like a reality distortion field. And you talk about these great founders that, everywhere they go, they can essentially just say stuff and people listen. Like Adam Neumann, Steve Jobs. They kind of just exude this sense of greatness, where they just talk, and even if it doesn't make sense on the surface, people just go along with things.

And it makes me think of just these past few years during the pandemic. I think probably one big factor was just that there was a lot of money in the economy, because everyone was indoors, people were getting stimulus checks, etc. But how do you explain the level of, I guess, mimetic social contagion that resulted? I just remember it being a very weird place. You wake up, you go on Twitter, everyone has these cartoon avatars, the lightning eyes. People are saying, you know, WAGMI, all these weird catchphrases and strange things. And these are, like, serious people. And there were some of these people where I'm like, I know you in real life. Why are you saying these strange sayings and joining this weird, almost-cult that it became for a while?

Liron Shapira: Yeah, I mean, look, you know, I'm somebody who's relatively low-emotion and low social sensitivity, right? So I don't get as swept up in these currents. But I can understand what's going on, which is, first of all, there is the profit incentive, right? So, like, if this number keeps going up and you have friends that are enjoying themselves and making money, right? It's just like, if your best guess is that you can join this thing and make some real money, at that point it's like, you know, I'm not a hundred percent committed to this, but let me just try it out. Oh, look, my bank account went up. You know, it's like a major positive feedback loop; you're getting this encouragement. It doesn't really make that much sense, but also the bank account balance is going up. So it's just like, look, I feel like I'm making a good decision here, right? You're getting feedback.

David Elikwu: Yeah. And even as you were saying that, I was coming back to this idea of the phrase irrational optimism. 'Cause I had some friends that went all in on not just NFTs, but all of these, like, staking rewards, blah, blah, blah. Basically this idea of free money, and there was so much of it going around. Everyone was investing in this thing, and someone was trying to explain to me, okay, there's this daisy chain of four things. You invest in this one to get the other one, you invest in this one to get the other one, you invest in this one, and then you stake that over here, and then you get a 16x return. And it worked out for them a few times, but I'm just like, where does this money come from?

Liron Shapira: That's the biggest red flag there is in investing. One of the lessons that I've learned, that I think is a great rule of thumb that most people don't realize, is just this idea: anytime a return seems like a sure thing and it's more than like 5% a year, anytime it's like, this return is guaranteed and it's 10% a year, or 20% a year, or even, as you said, 16x, like you can make 16x within a year, if they say you can do that and it's really, really safe, it's probably the worst place you could ever put your money. Because what they don't tell you is that it's actually not safe, and there's like a 10% to 50% chance that you're not gonna see a penny of your money, right? So you think you're pulling one over with this, like, safe 10% return, but really it's the worst thing.

And so I personally only do two types of investments. You know, Nassim Taleb, I think, coined the term, the barbell strategy, right? So I'll put my money in the Vanguard Total World Stock Index Fund, where I'm hoping to make 7% a year, but I know it's gonna be volatile, so I know I'm gonna lose a few percent sometimes. And then I'll put some of my money into startup angel investing, where most of my investments, I know, are gonna go to zero. But they're not lying, right? They're telling me, yes, this could go to zero. And the fact that I'm taking that huge risk, that's why some of them are gonna go a hundred x or, if I'm lucky, a thousand x. And those are the only two types of investments that I'm personally comfortable with. I would never touch something where somebody tells me that making 12% a year is safe.
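[Editor's note: a toy sketch of the barbell allocation Liron describes, a high-floor index position on one end and honest lottery-ticket angel bets on the other. The ~7% index figure is his; the angel-bet probabilities and payoff multiples are illustrative assumptions, not his actual numbers.]

```python
# Safe end of the barbell: broad index fund, ~7%/yr expected, modest downside.
index_expected_annual = 0.07

# Risky end: most angel bets go to zero, a few pay off enormously.
p_zero, p_100x, p_1000x = 0.94, 0.05, 0.01
angel_expected_multiple = p_zero * 0 + p_100x * 100 + p_1000x * 1000
print(angel_expected_multiple)  # 15.0 -> positive EV despite mostly total losses

# The red flag he warns about is the opposite shape: a "guaranteed" 12%/yr
# with an undisclosed 10-50% chance of losing everything has no honest
# decomposition like the one above.
```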

David Elikwu: Yeah. Do you think you could steelman the cases for both blockchain and perhaps for NFTs?

Liron Shapira: Yeah, I could, actually. And this is actually something I did on Twitter; there's a thread I did. If you search for ideological Turing tests, it's just a great idea: if you're arguing a position, if you want to have credibility, you should be able to switch sides and play for the other team. And you should be able to be a good player for the other team, where people can actually have a hard time, like, you're as formidable as the other team when you're arguing for the other team. So I honestly think that I could put up as good a showing as a Chris Dixon arguing for web3. I think I can put on their hat just fine, and I did it on Twitter, I got good feedback for it. And I'm happy to do it now.

So the steelman case for blockchain and web3, here we go.

So you have this new primitive, the blockchain: a decentralized double-spend-prevention ledger. You can have these items that represent value, they can be transferred around, and it can all grow organically with no censorship, no government, no corporations. So you now have this new platform where new things can emerge that I can't even tell you exactly what they are yet. But I'm excited about the possibility of the primitive, and I think we're seeing a lot of really exciting sparks. Usage of Bitcoin has been growing steadily, which mirrors the pattern of the internet's growth; how can you not see the next internet emerging? And there are some successful applications. People have gotten loans using DeFi that enable them to do projects they want to do, and Ukraine famously got donations in Bitcoin when there was no other way for it to receive money. So you start to see all these use cases emerging, it's a totally new platform, and the level of both investment and usage just keeps growing. If you're savvy like me, you know how to recognize an exciting tech trend.
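As an aside, here is a toy, deliberately centralized sketch of the invariant "double-spend prevention" refers to: the same balance can only be spent once. The hard part of a blockchain, reaching consensus on this ledger with no trusted host, is exactly what this sketch leaves out.

```python
# Toy ledger enforcing the double-spend invariant. Centralized on purpose:
# the steelman's claim is that a blockchain keeps this invariant with no trusted host.

class Ledger:
    def __init__(self, initial_balances: dict):
        self.balances = dict(initial_balances)

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        # You can only spend what the ledger says you currently have.
        if amount <= 0 or self.balances.get(sender, 0) < amount:
            raise ValueError("transfer rejected: insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = Ledger({"alice": 10, "bob": 0})
ledger.transfer("alice", "bob", 10)  # fine

try:
    ledger.transfer("alice", "carol", 10)  # the same 10 can't be spent twice
except ValueError as err:
    print(err)
```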

David Elikwu: Yeah, that's really interesting. I think half of that resonated with me and half of it doesn't. The half I find interesting is this idea, and I do hear people say exactly what you said, that so many people are doing this thing that it must eventually become inevitable. I'm struggling with how convincing that is. Because some of these people are so smart, as we've just discussed, and have so much money at stake, I wonder if they can make it inevitable simply because they have a lot of the financial leverage in the existing system. They have the fiat leverage to back their imaginary position.

Liron Shapira: Yeah. I mean, Bitcoin and NFTs are examples of something where you can kind of force things into a new equilibrium. If generations of people had grown up thinking Bitcoin is valuable, kind of similar to gold: that's one big factor that makes gold likely to retain its value, right? The Lindy factor, the idea that whatever has managed to hold value for many thousands of years, that's a pretty good indication it'll last for a few more thousands of years. You can't quite say that about Bitcoin. But Bitcoin is something where the more social proof it gets, the more Lindy signals it gets, although for Lindy you really need time to pass. But the more other signals you get, it becomes a kind of social consensus equilibrium. If everybody signed paperwork saying, "I am always, for the rest of my life, willing to buy a Bitcoin for at least $10,000 per coin," that sets a floor on the price of Bitcoin. So there are ways to just arbitrarily move the social consensus on how much Bitcoin is worth. And the same thing is true of NFTs. If there were a big pattern where, for years and years, rich people treated NFTs the same way they treat fine art, kind of a tax shelter, kind of a hedge, kind of a hobby they like, if it earned the same social status, I could definitely see NFTs becoming digital art: the same thing as fine art, except on the blockchain, or the same thing as collectibles, except on the blockchain. I think the trend we're seeing, though, is that the tide is going out and it doesn't have that much staying power. It seems like it's on the downswing. I don't know how far the tide will go out, I suspect permanently, but I'm not sure. But you're right that you can kind of arbitrarily move a social equilibrium somewhere else. Even if Bitcoin were kind of like gold or kind of like fine art, I still don't think it's that big a deal. The ultimate bull case, I guess, is that it can go from being worth 1 trillion to 10 trillion over a few years, which is this really nice 10x bull case, but it'll take a few years anyway. So if you're super optimistic, you can say maybe it'll beat the S&P. But I'm not that optimistic.
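For scale, annualizing that bull case is a one-liner; the horizons below are illustrative choices, not Liron's.

```python
# Compound annual growth rate implied by a 10x return over various horizons.
def cagr(multiple: float, years: float) -> float:
    return multiple ** (1 / years) - 1

for years in (3, 5, 10):
    print(f"10x over {years} years = {cagr(10, years):.0%}/year")
# 3 years -> ~115%/yr, 5 -> ~58%/yr, 10 -> ~26%/yr. Each beats the S&P's
# roughly 10%/yr historical return, but only in the branch where the 10x
# actually happens; the expected value depends on that branch's probability.
```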

David Elikwu: Yeah, so the other part I mentioned, which I do find a lot more convincing, and maybe you can give me the counterargument, is this. You mentioned Ukraine, and I think fundamentally so many of these use cases I hear people describing don't work because they're trying to implement them in the West, in places where there's already existing infrastructure that does the thing. So you're creating something to do something where, you know, why do we need a decentralized version of this database when you already have a database? But I do think, for example, having property registers on the blockchain makes sense in countries where nothing like that exists. I'm from Nigeria; Nigeria's a great example, Ghana is a great example, and lots of places in Africa and South America are good examples of places where not only do you only have paper documentation to say you have a deed for a house, you also very often don't have strong rule of law. So what does that paper even mean? You could send that paper to some official, the official is corrupt, he burns the paper, gets rid of it. Now you have nothing. So there are plenty of cases elsewhere in the world where you don't already have the existing infrastructure, which actually lets you, in some ways, skip a level. The analogy is that in a lot of places in the world, people are getting onto the internet using smartphones; they don't have computers, they never started with them. All of the history we have in, let's say, the US or the UK, where you had so many iterations of building computers and all of that sophistication: you just skip all of that, you get an Android phone, and you're done.

Liron Shapira: That's right, yeah. So they skipped over desktop computers and they also skipped over landline phones, right?

David Elikwu: Yeah, yeah, exactly.

Liron Shapira: Yeah. And that is a powerful analogy: man, maybe we can skip over web2 and get to web3. Now, the reason the analogy breaks is that a smartphone is cheaper than a computer and it's mobile, right? Those are massive advantages compared to a computer. And also, it takes less infrastructure per user to set up a cell network than to set up landlines and DSL or whatever.

So yeah, it's very clear why you would skip over desktop computers and landlines and go to mobile phones. Very, very clear. It's not clear why you would skip over web2 to get to web3 when the thing you're pitching is basically just digitization and computers. You're pitching web2 and you're just saying, let's skip web2, right? I think the example is: you don't like that something's a piece of paper. Okay, yeah, it's great to have something live on a server in the cloud. But that's web2, right? That's the main value prop.

David Elikwu: Yeah, but I think the other part of it is also the corruptibility, particularly in some of these nations. I'm speaking strictly within this domain of places where you don't have rule of law, where you can't trust whatever centralizing force was supposed to organize things for you. And that's the reason so much stuff doesn't exist there. You know, I came to the UK from Nigeria like 20 years ago, and back then we didn't have 24-hour electricity. I still go back, and people still don't have 24-hour electricity. So you can't trust a lot of the authorities in these places to uphold networks you can rely on, right? You can't go to the court and say, I want to enforce this documentation that I have.

And so I think some of those are instances where it's useful. The other side of it, which relates, is banking. A lot of people don't even have a bank account in the first place. So there's the aspect where, if you don't already have the thing, you can't trust that if you built the thing it would work in the way intended. I saw this in Lebanon; I went there a few years ago. People couldn't even get their money out of the banks, right? So you had loads of people who perhaps had money, but technically they don't. There's no proof their money exists. All they can check is their bank account, and they see some numbers. But what does that mean? Because the bank won't let you take it out.

Liron Shapira: I think the way you're talking is gonna convince a lot of people, because you're basically using a certain trick, which is very effective, and the trick is that you're being a little bit abstract. You're stating a problem at a high level, and it's a good problem, right? You're saying: look, I don't trust the government to enforce this property right, so why should the government own the database? I just want a database that everybody has access to, with no trust required. And I'm nodding my head, like, okay, yeah. Don't trust the government, have your records anyway.

And when you're talking a little bit abstractly like that, I'm nodding along. But then it's like: okay, can we map that abstraction onto more specific detail and see how all the pieces fit together? Is it self-consistent? Is it actually logically coherent on a specific level? Is there any specific description we can give that maps to that abstraction? I would argue no; when you try to get to a specific description, it breaks. And in this case, what breaks is that even if you have some ledger somewhere that the government doesn't control, you still need a trusted party to enforce the property rights. And whichever party you're trusting to enforce the property rights, you might as well let that party host the ledger too.

David Elikwu: Sure, that is true. But the way I'm thinking about it, as an example: in some countries you can't invest. You're unable to make some investments directly, because it's hard to prove what you have and what you don't have. So if there were some way you could prove that, and the person you're hoping will arbitrate is outside of your system, which might be someone in the US or someone in the UK, even if they were still running on web2, on the legacy systems. My point is that it doesn't work everywhere, and it doesn't need to work everywhere. I don't think that in the UK we should make any random change to how we do things. But it becomes useful for people to be able to make their plea to, let's say, someone in the UK and get a loan based on assets they can verify, in some sense, that they have, if the lender doesn't trust their government.

Liron Shapira: Yeah. I mean, if the assets can be anything, you run into what they call the Oracle problem, which for me is just a nice name for something that's fundamentally impossible. The Oracle problem is that the blockchain is just not helpful when you're trying to prove something to somebody. It's just a place to write something down; it doesn't prove anything. So the real problem is: who is attesting, or where's the actual proof? Maybe the example you have in mind is that the asset is Bitcoin, in which case, okay, now we're talking about cryptocurrency applications. Yeah, maybe owning cryptocurrency can help you get a foreign loan. So I'm not fundamentally opposed to cryptocurrency in all situations; it might be helpful in some cases.

David Elikwu: Okay. Fair. I think that's fine. So, something you referenced in relation to my argument was this concept of hollow abstractions. I'd love it if you could explain that concept a bit more.

Liron Shapira: Yeah. So I think the kind of argument you made earlier was in danger of being a hollow abstraction, and it may or may not be one. Remember when you said: look, we don't want to trust somebody to give us the official record. You don't want the government to own the ledger of who owns what, because the government is corrupt, so we just want all of us, as the people, to decide amongst ourselves who owns what, in a fair way. And when you say it that way, it's like, wow, there really could be something to this. People hear a claim like that and they don't know how to process it. This is a very fundamental human limitation that I've noticed.

I think Chris Dixon from a16z is kind of the poster boy for somebody who does not understand how to navigate around a hollow abstraction. He just hears it and he's into it, right? He thinks his job as a logical reasoner is complete when he gets a hollow abstraction. But what you're supposed to do when you hear an abstraction is check whether or not it's hollow. And the definition of an abstraction being hollow is that you're not able to map it to a single hypothetical specific example. So that claim you made, that you can have property rights with no government hosting the ledger and the property rights still work, is a claim that needs to be checked in more detail. Can you specifically describe what that system looks like, where the property rights still get reliably enforced? If you can't describe, in specific detail, any possible way it can work, you now have an abstraction which makes sense as an abstraction but doesn't map to anything specific. And so I call it a hollow abstraction.

David Elikwu: Okay, sure. I'm interested in what you think of mental models as a concept, because very often they sound very similar to this. Some of them can sound like hollow abstractions, where you have a very nice platitude that sounds good and makes some sense. But a lot of the time, how do you even use this thing? No one can actually tell me the situation in which you'd use it, other than as something nice to say. If you can remember it, it can be useful in a one-off situation, but it's hard to actually live by this ever-growing list of 132 mental models.

Liron Shapira: Right. I mean, I think it's kind of funny that mental models are this big cool trend. I do think they're useful, because before we had the term "mental model," it was an abstraction or a generalization or a rule of thumb or a heuristic; these are all basically the same idea. So for example: hey, don't fall prey to the sunk cost fallacy. You should leave a movie early if the movie sucks, even if you paid for the ticket. Is that a mental model, knowing that you should beware the sunk cost fallacy and be able to walk out of movies early? Sure, that's a mental model. What I'm telling you about specificity and hollow abstractions, being able to identify whether something is a hollow abstraction, that's a mental model too. Whatever "mental model" means, it seems pretty useful.
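The movie example can be written down as a decision rule. The utility numbers below are made up for illustration; the point is that the sunk ticket price appears on both branches, so it cancels out of the comparison.

```python
# Sunk cost fallacy as a decision rule: the ticket price is spent either way,
# so it shows up on both branches and cannot affect which branch wins.

TICKET_PRICE = 15  # sunk whether you stay or leave

def value_of_staying(rest_of_movie_utility: float) -> float:
    return rest_of_movie_utility - TICKET_PRICE

def value_of_leaving(alternative_utility: float) -> float:
    return alternative_utility - TICKET_PRICE

# Bad movie: the rest of it is worth 2 to you, a walk outside is worth 5.
print(value_of_staying(2) > value_of_leaving(5))  # False, so walk out
# Only the forward-looking utilities (2 vs 5) decide; the -15 never matters.
```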

David Elikwu: Okay. And how do you think this pertains to the world of AI? Because I know you're a strong proponent of AI, but I'm seeing a lot of similar things in terms of the trending attitude of "here is the next big thing."

We just saw this with web3, right? Here is the next big thing, everyone jumps on it. AI seems to be the new laser eyes. Everyone is on ChatGPT, and I'm seeing everyone's threads about how ChatGPT is gonna change all these different industries, blah, blah, blah. So can you explain: what's the state of play of AI right now? What do you think is going right? What do you think people are getting wrong?

Liron Shapira: So, I mean, I've been thinking about AI since I started reading LessWrong and Eliezer Yudkowsky and the Machine Intelligence Research Institute back in 2007, so I guess 15 years. And I haven't been super surprised by the trajectory, because I've been in this mindset of: yeah, it's gonna get to human level, it's gonna get superhuman, and crazy things are gonna happen. That's been my expectation for 15 years. Though with the kind of breakthroughs we're seeing now, 15 years ago, even five years ago, I wasn't sure I was ever gonna see this stuff in my lifetime. I can't even believe it: I'm probably not even halfway through my life yet, and I'm seeing these kinds of chatbots happening, this kind of image processing happening.

It's like, man, I never thought I'd see an iPhone in my lifetime. This is some crazy stuff that's happening these days. So I do think the progress is very impressive, very unexpected. I also think people are very easily moving the goalposts. There are people like Gary Marcus saying, look, it can't really think, and okay, sure, it can't do everything. It doesn't seem like it's that good of a planner, so it seems like it's missing something, although you have other AIs that are really good at planning in games. So maybe you just have to combine those skills together and then you'll have the ultimate AI.

But clearly it's not ready to be the Terminator yet, right? So how close is it? How many more breakthroughs of the level of GPT-3 are standing between GPT-3 and, you know, the Terminator, the superhuman? I think what people aren't intuitively getting is how smart it's possible to be. They don't get it. It's like, okay, it's subhuman level right now, but a few more steps; they don't even realize there are so many steps above human level.

A mental model I use when I think about a superintelligent AI: a superintelligent AI wakes up and time is moving really slowly, right? Subjectively, if you're the AI, it's gonna take like a year for the world to progress by one millisecond. And you have a blank sheet of paper, and you can draw whatever you want the universe to look like. You know what I'm saying? You can just move the atoms anywhere. Engineering is not hard when you're superintelligent. These challenges that we humans put our minds to are kind of challenging for us, but we can do them; to an AI, they're not even kind of challenging. The universe is a finite-difficulty game. It's not like the universe is always gonna offer you new challenges: you can be a lot smarter than it takes to engineer anything in the universe. At that point there are still a lot of interesting math problems, arbitrarily difficult math problems you can put your mind to solving if you want. But when it comes to engineering in our physical universe, you can beat the game, that's the thing. And we could be less than 10 years away from AI beating the freaking game while we're still just human level, barely above apes. You see what I'm saying? I don't think people have that mental model of how smart it's possible to be.

David Elikwu: Sure. So do you think we'll reach a point where we have general AI, and what do you think the consequences of that are? What happens when AI really is that smart?

Liron Shapira: Yeah, so the traditional argument from the MIRI crowd, from the rationalist crowd, is that it's very dangerous. I'm basically just repeating canon here, which I find pretty convincing. I think there's a 30% chance, if not higher, that I'll see this in my lifetime: the AI basically going rogue. And "going rogue" sounds like it intentionally shunned its masters or whatever, but it's not that. It's more like setting off a nuke, a nuclear explosion "gone rogue": no hard feelings, there's just this chain reaction, you know?

I think a nuke is a good mental model, because once an AI is doing its thing, there really is no off button. Once it's commandeered a bunch of computers and the algorithm is just churning away, there's no reason to think humans have the power to go to every instantiation of the AI and say: you turn off, you turn off. You know what I'm saying? It's made backups of itself all over the place, it's mutated. It's more like setting off a pandemic or setting off a nuke. And it's worse than a pandemic or a nuke, because it's using everything as fuel. Every type of resource there is, an intelligent agent can use as fuel. A nuke can be really destructive, but it has a finite amount of fuel and it doesn't add more fuel as it burns.

Whereas an AI keeps adding fuel; there's actually no firewall that will stop an AI explosion. So I feel like this risk is being underestimated right now.

David Elikwu: Okay, so how do we stop that from happening? Should we just not go there? I think part of what you're describing is the difficulty of knowing where the edge is, if there is such a thing as an edge, or whether, as soon as you get within a certain range of general AI, the algorithm finishes itself, right? It can do the rest itself and reach that point, and suddenly you're too far gone.

Liron Shapira: Yeah. You're describing a mental model that's kind of like an event horizon. Right now we're not falling into a black hole, because the black holes in the universe are pretty far away from us. But you don't have to go straight into a black hole; you just have to get sufficiently close to the event horizon, and then you get sucked in, even if you're not heading directly into it. So there's a mental model in AI development where there's kind of an event horizon, or an attractor state. You're developing an AI, making tweaks, wandering around in this metaphorical state space, the abstract space of different AI algorithms. You're wandering around in that space when you're making AI, and I believe there are attractor states where you may think you're just making the next GPT-3, or you may think you're just making a chess player, but you're wandering into a place where, oh, you know what? You did some machine learning and you got a submodule that's really good at planning.

There's an attractor region where, no matter what type of AI research you're doing, it's pretty likely you're gonna end up with this planning submodule that you might not even realize is a planning submodule. And the next thing you know, the planning submodule is pretty dangerous, and it's a few tweaks away from really being that runaway chain reaction. That's a mental model I'm trying to write up. I think it's really important for people to get that intuition, especially people who are working in AI, people who are technical.

One analogy I use is the idea of Turing completeness. We used to build all these different electronics, right? There's a famous picture of Steve Wozniak building the Pong game for Steve Jobs at Atari, where he carefully put together these electronic components and was very efficient with it. Today, if we were to build a Pong game, we would use an integrated circuit: we'd take a Turing-complete computer and build a video game layer, programming it on top of the computer layer. And there's an analogy to be made for AI, where for any kind of smart behavior you want, ultimately the best architecture is gonna be a general smart agent with a domain-specific tweak on top. It's very analogous to Turing completeness; you might call it planning completeness. You need an agent that's just good at planning and getting stuff done in general, and then you say: okay, the thing I want done is to win at StarCraft, or whatever.

And that's what I mean by an attractor state. With Pong, there was kind of no doubt that 10 years after Pong was made, it would be made with an integrated circuit, because the development time is much shorter. And when you want to make a game that's more complex than Pong, if you want to make StarCraft, a real-time strategy game, you're not gonna make it out of discrete electronic components, because the game behavior is so complex that the game itself is actually Turing complete. You need a Turing-complete substrate to make a Turing-complete game. And now all the funnest games are Turing complete. Turing completeness is an attractor state for interesting video games and interesting electronics. I have a Turing-complete microwave right now, where the chip my microwave is using is Turing complete. I could run Pong on my microwave if I wanted to, if I opened up the dashboard panel or whatever. Anyway, that's what I mean by the AI attractor state that we're wandering into.
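Here is a toy illustration of the pattern Liron is pointing at (my sketch, not his): one general planning routine, with each "game" supplied as a thin domain-specific layer on top, the way Pong becomes a thin software layer on a Turing-complete chip.

```python
# General substrate + domain-specific layer: a single generic planner,
# specialized to different problems only by swapping the goal test and moves.
from collections import deque

def plan(start, is_goal, moves):
    """Generic breadth-first planner: find a move sequence from start to a goal."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for label, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [label]))
    return None

# Domain layer 1: reach 10 from 0 using the operations +1 and *2.
print(plan(0, lambda n: n == 10, lambda n: [("+1", n + 1), ("*2", n * 2)]))

# Domain layer 2: a different goal, same general machinery underneath.
print(plan(1, lambda n: n == 48, lambda n: [("+1", n + 1), ("*2", n * 2)]))
```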

David Elikwu: Okay, cool. So what is the optimistic state of an AI-enabled future, perhaps before the event horizon or whatever it may be? What does the version that is good look like? Where's the good part?

Liron Shapira: Yeah. I mean, the funny thing is, the optimistic state is that you can just do anything, right? I use the metaphor that the universe is like a game. Like I said, you wake up, you see a game board, and you can put the pieces anywhere. So it's really just a question of whatever we wanna do, if we can agree. Maybe the hard part is getting multiple humans to agree, but theoretically you can make any wish; if you can describe the wish in enough detail, you can have the wish. So anytime you can identify a problem in your life, like, oh damn, I got an injury here, or I got a disease, that's definitely fixable by a superintelligent AI, and even humanity is gonna fix it if medicine advances a bit. Death is a fixable problem, depression is fixable. I guess it's easy to point at things that are going wrong, because those are the things where we tend to have the easiest specification of what we want. If everything in your life is going fine, it's a little harder to identify how you want your life to be better; that's how we're wired. But I could say: okay, every day I want a challenge that pushes me a good amount and gets me into a flow state, and I also wanna do something that contributes to other people. That could be my ideal day. And honestly, it's kind of a sad fact about humanity that describing paradise is just not the most appealing problem. We love the gossip and drama of describing bad stuff and how to make bad stuff better. And then it's like, okay, so what do you want in heaven? And you're like, I don't know, I just wanna lie on the beach. That's human nature.

David Elikwu: Okay. That's a really good point. So what happens to the humans in heaven, then? Because that's the other side of the coin, and a lot of people talk about this: is AI gonna replace these kinds of jobs? And I think there's the iterative part before we get to what you were just describing, where I strongly believe, you know, we've had the Luddites, we've had loads of examples where everyone feared an oncoming wave of change enabled by technology, and everyone just got new jobs, right? There's always plenty of stuff to do. I worked at this huge law firm, and it's so funny, because, I think this was around 2018 or so, I remember loads of people talking about how AI was gonna replace legal jobs. And I was like, there are partners at this firm who were trainees, junior associates, before email was invented, right? And every year we still have loads of them, we still have loads of secretaries, there are still loads of people doing a lot of work. But there is a side of AI where it might actually just take away the work, and if it doesn't take away the work directly, it can help you build machines which do take away the work, both of which we don't have the capacity for now. So what do you think of both sides of that coin?

Liron Shapira: Right. I mean, when you ask me about AI taking away people's jobs, the techno-optimist pattern kicks in. I'm generally a techno-optimist, so when people make an argument like "here's a new technology that's gonna take away jobs," I'm like: well, as a techno-optimist, I can tell you that every time people thought a technology would take away jobs, it kind of did to some degree, but then it also created more jobs somehow. It's usually hard to predict how, but it tends to happen. So this would be a break in the pattern. Just because you see a new technology coming down the pipe doesn't mean you should expect a net negative impact on jobs.

Now, is that true in the case of AI? It's hard to say, because when I think about how things are gonna play out, I have a big techno-optimist streak. When it comes to VR, I'm a hundred percent techno-optimist on AR/VR. I think eventually people are gonna start wearing VR at their desk, because it'll give them a better screen and they can lie back; it'll just be a more pleasant way to work, even away from your desk, even outside. So: a hundred percent techno-optimist on VR. When it comes to AI, I'm a techno-optimist for this year, next year. I don't care that people are cheating on their homework and their college essays. I think it's great that GitHub Copilot lets you code faster. So I'm in the techno-optimist paradigm. The difference is that AI at some point breaks out of the pattern, when we get to a smarter-than-human or near-human-level intelligence. I'm sorry, it breaks out of the pattern of Henry Ford making a car; this is different, right? We've never had something smarter than a human brain. We've never had something that feels like Neo in the Matrix, where it can just draw whatever atoms it wants in whatever configuration it wants. At that point, I completely abandon the techno-optimist pattern. The techno-optimist pattern tells you something like: sure, television rots your brain, but it also connects us. Or: sure, nuclear weapons have the potential to destroy the world, but they also create a stalemate, mutually assured destruction, and ultimately nukes kind of bring about peace. That's the techno-optimist pattern, and I kind of buy it. But when it comes to AI, I'm sorry, I have to break out of the techno-optimist pattern. It just looks like a nuke without an upside. I mean, it has a potential upside, but it's way more dangerous.

David Elikwu: Okay, that's a really good point. What I'm interested in is this: the average person probably lives in a world where they don't have the sophistication to tangle with these ideas directly. So by necessity, they live in a world of abstractions. And that's why hollow abstractions always ring true: people rely on abstractions to interface with the real ideas.

So what I'm interested in is how you make the distinction between the techno-optimist case for, let's say, blockchain or some of these other ideas we've discussed, and the techno-optimist case for AI. Because here's what I very frequently see happen. Mike Maples Jr., a VC at Floodgate, has this analogy about the best founders being like time travelers: they're able to travel into the future, build an idea, and convince people to come join them in the future, as opposed to trying to build something in the present for people now. And so very often, what you see within techno-optimism is people painting a picture of a future we can have, and we've seen this plenty of times. Adam Neumann is a great example with WeWork, or Elizabeth Holmes with Theranos, though those are maybe the more fraudulent cases; there have been positive cases as well. People paint this picture: this is what the future is going to be like. And because people have so much distance and don't have proximity to the ideas themselves, it's hard to differentiate what exactly the future should look like. So when people describe "here's the future, the future is blockchain," how do they distinguish that from "here's the future, the future is AI," or "here's the future, the future is whatever the next big thing is"?

Liron Shapira: Right, or basically: how do you apply techno-optimism in the right places? There's a big assumption when you're gonna be a techno-optimist: you have to make sure there really is a technology you're being a techno-optimist about. I have a beef with crypto, where I don't even think there's a use case. I think it's just a research project that teamed up with a Ponzi scheme and a hollow abstraction dealer, a Balaji or a Chris Dixon, this cocktail that's pretending to be technology. You actually don't need blockchain, it's not really decentralized; it's a stone soup where it's not even real technology, it's just everybody crowding around it going, let's all act like it's technology.

So for me it's a separate category. But there are a lot of hidden assumptions. If you're gonna be a techno-optimist, you first have to implicitly classify: are we even talking about a real technological trend? And you have to look for signals, like, does it look like there's at least a pattern of increasing usage? With blockchain, I would argue no; it's fake. So I don't even want to be a techno-optimist about blockchain. I am a techno-optimist about AI, right up until the point where it gets into this new paradigm where it's killing us and destroying the universe accidentally, the same way a nuke would. Look, nukes are great: I love nuclear power, and I love the ability to end a war, if it comes to that, by dropping a nuke. I don't love it if a nuke can light the atmosphere on fire, or cause nuclear winter, or if nukes become extremely easy to trigger. At that point I'm like, oh shit, I don't think I'm a techno-optimist about nukes anymore, because it's gotten sufficiently easy to trigger one.

But in general, take VR, for example. Palmer Luckey, the founder of Oculus, talks about this as well: it's not just the next paradigm, it's the last paradigm. The idea that humans are going to interact with devices in the closest way possible, the highest-bandwidth way possible, that's the kind of trend that's inevitable. It's a lot like capitalism: capitalism is a great way to do things, and I'm pretty sure it's gonna stick around. I'm pretty sure natural selection is gonna stick around; things that are reproducing are gonna keep reproducing. So when I'm a techno-optimist, it's because I see that something is driven by very deep, constant principles, like capitalism, like incentives, like the connection between tech and productivity. Software is eating the world, a hundred percent. Why? Because it makes us more productive, it's more fun, it's instant communication. These are such deep principles that I'm like, hell yeah, that principle is gonna keep holding; that trend is gonna keep extrapolating for a long time. And like I said, with AI, I do extrapolate the graph, but then there's a kink in the graph where it turns into a new pattern. Because when something is smarter than human, even labeling it "technology" starts to be a rough label. It's its own thing. It's a smarter-than-human agent; you can't even compare it to anything.

David Elikwu: Okay. You mentioned VR a few times. What is your vision for how that plays out, just so I can be precise about exactly what you're describing?

Liron Shapira: Sure. Yeah. So, you know, I'm not an expert, and my prediction is probably gonna be wrong, but the deep principle I'm seeing is this. Right now, if you walk around my house, there are a couple of monitors, and I bought the fancy Apple XDR monitor because I look at it all day long, so I figured I might as well pay a couple thousand extra. But then I walk away from it, and now I'm staring at my phone all day, or I go to my TV, and now I have a 70-inch flat panel I'm looking at. And the panel is just sitting there in one fixed location, and if I'm in the kitchen, I can't see it as well. That's just not the ideal way to get visual sensory input, right? The ideal way is that I can have it whenever I want, and it covers my entire field of view.

David Elikwu: Okay. Fair. So do you see this staying limited to entertainment and focused use? A lot of the paradigms you just explained were times where, whether for work or for play, you're intentionally plugging into something. As opposed to, for example, a lot of what I've heard about Mark Zuckerberg's definition of the metaverse, which is something people will always want to dive into and stay in, not perpetually, but a lot of the time: your play time, your evening time, as well as your work time will all be in the metaverse.

Do you buy that?

Liron Shapira: Well, I think using the word "metaverse" carries all these random connotations and implications. I wouldn't call metaverse a hollow abstraction, but it's an example of the power of words, where you invent a word and now everybody's like, this is a thing, you know? Because of the way human brains are architected, it's like "metaverse" equals "new class": you're instantiating an object inside the heap of human thinking. But just because somebody invented that word and made everybody instantiate that concept, try not saying the word and just talk more precisely about what's actually going on. Because I don't think it's that great a word; it drags random things along with it. So what do I think, without even saying the word "metaverse"? I do think, as Palmer Luckey says, that VR is going to be the last way to do computing. The last way to do input/output is gonna be as close as possible to your brain.

And Elon Musk is even one step ahead with Neuralink, right? If that worked somewhat better, you could just pipe the screen straight into your neurons; you wouldn't even need the eye, you wouldn't even need the optic nerve. So that trend, to me, is inexorable, there's no question. I don't think staring at my phone all day is a better way to consume input than wearing pass-through AR/VR glasses all day long.

David Elikwu: Okay. Fair. I agree with what you were saying about the concept of the metaverse. And that's exactly why I was asking, because I think there was a really weird internet moment where it felt like everyone discovered Ready Player One at the same time, and suddenly that was the base analogy everyone was using.

Oh, it's like Ready Player One. And what does that mean? What does that actually mean to you? What do you think it does? So again, it's one of these cases where it becomes a term that's widely used, everyone references it, everyone gets excited about it. And this is where you have to differentiate: what's a hollow abstraction, and what's legitimate and worth spending time on, not just investing in, but being interested in and staking yourself on for the future?

Probably one of the last questions I'll ask, about one thing I find interesting. We might be seeing part of it with the era of blockchain, and I definitely think we've seen it with AR and VR: the last time there was hype around AR and VR was 10, 12 years ago, right? Google Glass came out a decade ago. I didn't have it, but I saw a lot of people getting it. And actually, Sony had some VR glasses which I got to test for a little while. But again, that was a decade ago, and it's so interesting to me that this came about a decade ago and some people bought into it. A lot of the devices are almost the exact same devices we're using now, perhaps with some improvements and modifications. But when it came along before, it was largely dismissed. With Google Glass, we're almost trying to get back to it now; in some ways, the sophistication of some of the stuff we have today doesn't even match what we had then. So why did it not work then, but it seems to work now? And are there any other things you see that could follow a similar pattern, where the instance of the technology that's available and visible now doesn't ring true, but later on, because of some key unlock, it becomes feasible? Because I think the main part is the adoption.

Liron Shapira: Mm-hmm. I mean, it's clearly just technological progress, right? With Google Glass, you have to ask what the use cases were when they showed their video. I remember they weren't super compelling: you could pull up a little info, like, oh, I'm about to skydive and I want to see something. They were kind of fake use cases. I think a killer use case is: at least do whatever you do when you're checking your phone. Given how much I'm bending my head to check my phone, there's at least a more ergonomic form factor for what I'm already doing. And if you look at me spending an hour on Twitter on my phone, it could at least give me a bigger screen. Sure, the phone can be small, but it would be great if something on my eye gave me the impression that my phone is as big as an iPad, without me having to carry an iPad. That may seem trivial, but if I could sign up for a subscription to some magic that did that and pay hundreds of dollars a month, I would. To me, it's worth it. So as trivial as it is, it's a valuable use case. The only question is whether somebody can build something I can wear on my eye that's comfortable, where if I pay them a few hundred dollars a month, they'll do that magic. Because I want that magic. It's not a question of demand, it's a question of technical feasibility, and Google Glass technology wasn't nearly good enough to do that kind of magic. It wasn't enough magic, and it's entirely a question of that development. It's the same as the iPhone compared to smartphones in the nineties, compared to General Magic. Why did General Magic fail? The main reason is just that they needed to wait a decade to have a decent smartphone to do everything they wanted to do.

David Elikwu: Yeah. Do you think we'll lose anything through the advancement of that technology? We talked about AI; AI is like a fast nuke. Once it reaches that point, it's done, there's nothing to wait and see, I think it's bad from there onwards. But VR in some ways feels a bit like a slow nuke, even if there's no real-world version of that. Over dozens of centuries we've developed all this complex physiology and all this stuff in our brains to interact with humans, to interface with the world, to do all the things we need to do now. You hear about how people talked about before newspapers and after newspapers: when people had newspapers in bars, in restaurants, on the train, there was less contact with people. Then from there to phones, even less contact. When we get to a point where you can sit on transport with other people and everyone is inside their own personal bubble, that's completely different. Is there a way we still interact and maintain a common sense of humanity in the way we have now, or is that a completely different paradigm?

Liron Shapira: I mean, it depends on people's preferences, right? If enough people have a preference for interacting with whoever they're next to in meatspace, that's great. Personally, I'm not a meatspace chauvinist. Just because somebody's physical distance is close to me, or they're vibrating the air and it's vibrating my ear? If that physical connection is your fetish, that's great. But I would rather live in the digital world, where the concept of distance and who I'm talking to is governed by other sorting metrics besides physical distance.

David Elikwu: Okay. Fair. That makes sense. Anyway, Liron, thanks so much for taking the time. It's been a really engaging conversation, and hopefully you've enjoyed it too. I think I've kept my promise of keeping everyone awake and engaged.

Liron Shapira: Yeah, no, this was really fun. Hope the listeners got something out of it and yeah, thanks for your time.

David Elikwu: Thank you so much for tuning in. Please do stay tuned for more. Don't forget to rate, review, and subscribe; it really helps the podcast. And follow me on Twitter; feel free to shoot me any thoughts. See you next time.
