  • Good afternoon, everybody. I'm Andrew Ross Sorkin. It is a privilege to have with me Peter Thiel this afternoon, one of the great legendary investors in Silicon Valley. He has been involved in just about everything that you touch and feel, including being the co-founder of PayPal and the co-founder of Palantir. He made the first outside investment in Facebook. His firm Founders Fund is a big backer of Stripe and SpaceX, and he has backed numerous other startups through Founders Fund and Thiel Capital. He also started the Thiel Fellowship, a two-year program that's an alternative to a college degree, which I want to get to at one point. And more importantly than all of it, he has touched some of the people, and found the people, who you read about in the headlines every day, from Mark Zuckerberg to Elon Musk to Sam Altman and so many others.

  • And it is great to have you here.

  • Thanks for having me.

  • We're also going to talk a little politics as well, along with maybe some of the issues and culture conversations that are happening in Silicon Valley. But here's where I want to start the conversation: talking about people, because I think there's something actually extraordinary when you think about your track record over the years of involving yourself in investing, not just in companies, but ultimately in people. You wrote a book, which is coming up on its 10-year anniversary. And by the way, I reread it, and it stands up in a very big way. It is called Zero to One. And you wrote the following about founders, the idea of founders: the lesson for business is that we need founders. If anything, we should be more tolerant of founders who seem strange or extreme. We need unusual individuals to lead companies beyond mere incrementalism.

  • And I mention that because I also just mentioned a number of individuals which we read about all the time. And some of those people would be described as unusual, perhaps, or even strange. And I'm curious about how you think over the years you have found these individuals, what it is that has made these individuals as successful as they have become.

  • Yes, obviously, if there were some simple magic formula, this is what a founder looks like and you invest in this category of people, it would probably get faked. It's like, I don't know, a 20-year-old with a T-shirt and jeans, or something like this, or you end up with all kinds of really fake ideas. But yeah, I think a lot of the great companies that have been built over the last two decades were founded by people for whom the company was somehow deeply connected to their identity, their life's project.

  • They had some kind of idiosyncratic, somewhat different vision of what they were doing.

  • They did something new, and then they built something extraordinarily big over the years.

  • And of course, they have these sort of extreme personalities and often have a lot of blind spots. There are all these ways in which it's a feature, and there are ways in which it can be a little bit buggy. But it's sort of a package deal, and I net out to it being massively advantageous versus, let's say, a professional CEO being brought in.

  • The prehistory of this, I would say, would be in the 1990s. The Silicon Valley formula was you had various people found the company, and then you'd replace them as quickly as possible with professional CEOs, professional management. And there are variations of this that happened with Netscape, and Yahoo, and even Google, all these companies.

  • The Gen X people founded them. The baby boomers came along and took over the companies and stole them from the Gen X founders in the 90s. In the 2000s, when the millennials founded the companies, they were given more of an opportunity, and it made a big difference.

  • The Facebook story I always tell is, it was 2006, two years in, Zuckerberg was like 22 years old, and we got a $1 billion offer to sell the company to Yahoo. And we had a board meeting. There were three of us, and we thought we should at least talk about it.

  • It was a lot of money. Zuckerberg would make $250 million. And it was sort of an eight-hour-long discussion, and he didn't know what he'd do with the money; he'd just start another social networking company, and he kind of liked the one he had. He didn't know what else he would do, and so he really didn't want to sell. And if you had a professional CEO, it would have just been: man, I can't believe they're offering us $1 billion, and I'm going to try not to be too eager, and we better take the money and run. And getting that one thing right makes a big difference.

  • Let me ask you a different question. All of these individuals have had a huge impact on society and have enormous individual power. And I think one of the things that you've argued in this book, and that you've argued over the years, is that we need to give them that power.

  • We need to offer them a latitude that in many ways we don't offer others.

  • Well, I think one of the frames I always have is that there are many ways in which the United States, the developed countries, have been relatively stagnant for the last 50 years. Progress has slowed. We've had progress in computers, the Internet, software; in many other domains, things have kind of stalled out. And it sort of manifests in low economic growth, in the sense that the younger generation is going to have a tough time doing as well as their parents. And there is sort of this way that there has been this broad stagnation for 40, 50 years, and we need to find ways to do new things. I don't think tech startup companies are the only way to do them, but they are a vehicle for doing it. And yeah, if you don't allow these companies a certain latitude and flexibility to try to do new things, if we shut it down right away, the stagnation will be worse than ever.

  • Okay, but here's a separate almost philosophical question. I'm going to read back something you said to The New Yorker. There was a piece about Sam Altman. This is right around actually when OpenAI began, 2016. And I think it actually might even be representative of how you might think about Mark Zuckerberg or Elon Musk or some of these other kinds of major players.

  • This is what you said. You said, Sam's program for the world is anchored by ideas, not people. And that's what makes it powerful, because it doesn't immediately get derailed by questions of popularity. And I thought that that was actually very indicative of most of the people that you have invested in. It's really been about ideas, and in some ways, you could even argue, it's disconnected from people.

  • I think it is really that they're able to think about a whole wide spectrum of things. Good founders have theories of how to hire people, how to manage them, how to build teams. They have theories about where the culture and the society are going. They have technical ideas about the product, the design. They have ideas about how they should market their company. So they're sort of polymaths who are able to think about a lot of these things. But yeah, I'm biased towards a lot of the ones where it's more intellectual. But I think that quote has held up pretty well with Sam Altman. Maybe he needed to pay a little bit more attention to the board and things like that. There was probably a people dimension that he had ignored a little bit too much by November 2023.

  • Since we're on the Sam Altman of it all, and since Sam was here yesterday, I'm so curious.

  • You were a mentor of his. What do you think of OpenAI? What do you think of AI more broadly right now? I mean, are we in a bubble? Is this the future? What is this?

  • That's a broad question. I think I'm always hesitant to talk about it because I feel there are so many things I would have said about AI where I would have been very wrong two, three years ago. So maybe I'll start by just saying a little bit about the history of what people thought was going to happen, and then the surprising thing that OpenAI achieved that did happen. If you had this debate in the 2010s, there was sort of a frame in terms of two paradigms, two books. There was the Bostrom book, Superintelligence (2014), which said that AI was going to be this godlike superhuman intelligence, that it was heading towards this godlike oracle. That was what AI was going to be. And then there was the Kai-Fu Lee rebuttal, AI Superpowers (2018), which was sort of the CCP rebuttal to Silicon Valley: no, AI is not about godlike intelligence, that's a science fiction fantasy Silicon Valley has. AI is going to be about machine learning, data collection. It's not conscious. It's not any of these weird things. It's surveillance tech. And China is going to beat the US in the race for AI because we have no qualms about sort of this totalitarian (not the word he used) collection of data in our society. And that was sort of the way the AI debate got framed. And then the thing I always said was, man, it's just such a weird word. It means all these different things. It's annoyingly undefined. But then the sort of surprising and strangely unexpected thing that happened is that, in some sense, what OpenAI achieved with ChatGPT 3.5 and 4 in late 2022, early 2023 was that you passed the Turing test, which was not superintelligence. It's not godlike. It's not low-tech surveillance. But that had been the Holy Grail of AI for 60 or 70 years. The Turing test is you have a computer that can convince you that it's a human being. And it's a somewhat fuzzy line. But it pretty clearly hadn't been passed before, and it pretty clearly is passed now. And that's a really extraordinary achievement. It raises all sorts of interesting, big picture questions. What does it mean to be a human being in 2024? The placeholder answer I would have been tempted to give a couple of years ago would be something like the Noam Chomsky idea that there is something very important about language, that this is what sets humans apart from all the other animals. We talk to each other. And we have these rich semantics and syntax.

  • And so if a computer can replicate that, what does that mean for all of us in this room?

  • And so it's an extraordinary development. And it was also somehow, even though it had been the Holy Grail, something that in the decade before was not expected at all. And so there's something very significant about it, and very underrated. And then, of course, you get all these questions, the Econ 101 question: is it a complement, is it going to make people more productive? Or is it a substitute good, where it's going to replace people?

  • What do you think of all of this, and how bullish as an investor are you on this? And what do you think it means for our society? When you hear Sam Altman talk about this, do you say he's right, that's what it's going to be? Or do you think it's going to be something else? You lived through 1999. There are some people who say this is a hype cycle; other people say this is the future.

  • Well, I'm very anchored on the '99 history. And I somehow always like to say that '99 was both: the peak of the bubble was also, in a sense, the peak of clarity. People had realized the new economy was going to replace the old economy, that the Internet was going to be the most important thing in the 21st century. And people were right about that. And then the specific investments were incredibly hard to make, even the no-brainer market leader. So, you know, in 1999, the no-brainer investment would have been Amazon stock. It's the leading e-commerce company, and they're going to scale and get bigger. And it peaked in December '99 at $113 a share. It was $5.50 in October 2001, 22 months later. You then had to wait until the end of 2009 to get back to the '99 highs. And then if you'd waited until today, you would have made 25 times your money from '99. But you would have first gone down 95%, and then made 500x off the bottom. So even the no-brainer investment from '99 was wickedly tricky to pull off in retrospect.
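(As a quick sanity check on the arithmetic quoted above: the sketch below uses the per-share figures as stated in the conversation, not today's split-adjusted prices, and `multiple_from_peak` is just the "25 times your money" figure as quoted.)

```python
# Sanity check of the Amazon figures as quoted above.
# Pre-split, per-share dollars; today's split-adjusted ticker looks different.
peak_dec_1999 = 113.0      # December 1999 peak, $/share
low_oct_2001 = 5.50        # October 2001 low, 22 months later
multiple_from_peak = 25.0  # "25 times your money from '99"

drawdown = 1 - low_oct_2001 / peak_dec_1999
implied_final_price = peak_dec_1999 * multiple_from_peak
multiple_from_bottom = implied_final_price / low_oct_2001

print(f"drawdown from peak:       {drawdown:.1%}")               # ~95.1%
print(f"implied multiple off low: {multiple_from_bottom:.0f}x")  # ~514x
```

The two figures are consistent: losing roughly 95% and then making roughly 500x off the bottom nets out to about 25x from the 1999 peak (0.049 × 514 ≈ 25).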

  • And I sort of think that AI, the LLM form of AI –

  • These are the large language models.

  • Large language models.

  • The OpenAIs of the world.

  • Again, that's – passing the Turing test, I think it's roughly on the scale of the Internet. And so it's an incredibly important thing. It's going to be very important socially, politically, philosophically, about all these questions about meaning. And then the financial investment question I find unbelievably hard and confusing. And yeah, it's probably quite tricky.

  • If I had to sort of concretize it, one thing that's very strange: if you just follow the money, at this point 80% to 85% of the money in AI is being made by one company. It's NVIDIA. And so it's all on this sort of very weird hardware layer, which Silicon Valley doesn't even know very much about anymore. We don't do silicon chips in Silicon Valley anymore. I get pitched on these companies once every three or four years, and it's always: I have no clue how to do this. It sounds like a pretty good idea, but man, I have no clue, and we never invest.

  • And then there's sort of this theory that the hardware piece makes the money initially, then gets more commodified over time, and it'll shift to software. And the, I don't know, the multi-trillion-dollar question is: is that going to be true again this time, or will NVIDIA sort of have this incredible monopoly position?

  • And what's your bet at the moment?

  • I suspect NVIDIA will – I think it will maintain its position for a while. I think the game theory on it is something like this: all the big tech companies are going to start to try to design their own AI chips so they don't have to pay the 10x markup to NVIDIA.

  • And then how hard is it for them to do it? How long will it take? If they all do it, then the chips become a commodity and nobody makes money in chips. So then, do you go into hardware? You should do it if nobody else is doing it; if everybody does it, you shouldn't do it. I'm not sure how that nets out, but probably people stay stuck for a while, and NVIDIA goes from strength to strength for a while.
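(The entry logic in that answer, worth doing if few firms design their own chips, value-destroying if everyone does, can be illustrated with a toy payoff model. This is a hypothetical sketch: the market size and design cost below are made-up numbers, not figures from the conversation.)

```python
# Toy model of the chip-design entry game described above: a fixed market
# splits among however many big tech companies build their own AI chips,
# and each entrant pays a fixed design cost. Illustrative numbers only.
def entry_payoff(n_entrants: int, market: float = 10.0, design_cost: float = 3.0) -> float:
    """Per-entrant profit when n_entrants firms design their own chips."""
    if n_entrants == 0:
        return 0.0
    return market / n_entrants - design_cost

for n in range(1, 6):
    print(f"{n} entrant(s): payoff {entry_payoff(n):+.2f}")
# 1 entrant:  +7.00  (near-monopoly economics)
# 2 entrants: +2.00
# 3 entrants: +0.33
# 4 entrants: -0.50  (chips are now a commodity; entry destroys value)
# 5 entrants: -1.00
```

Entry is attractive when you expect to be nearly alone and unattractive when you expect a crowd, which is why everyone can hesitate and the incumbent keeps its position for a while.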

  • I have a related but maybe personal question for you. You happen to have this very interesting relationship with Sam Altman and then also a very interesting relationship with Elon Musk.

  • You both worked at PayPal. You famously were part of a coup effectively to push Elon Musk out of the company. You're now friends with him all over again and have a stake in SpaceX.

  • You can maybe walk us through that friendship.

  • We had some rough moments in 2000-2001.

  • We can get into that if you want, but where I was going to go with this, actually, is that one of the things that's been fascinating, to the Valley and I think to the rest of the country, has been the commentary we've heard from Elon Musk, who helped build OpenAI with Sam, and the break actually between the two of them over creating this not-for-profit and what's happened to it. In fact, Elon Musk originally sued Sam earlier this year and then dropped the suit recently. But how do you think about this idea of a company that was started as a not-for-profit, and all of the safety concerns and things that you hear from Elon on one side and Sam on the other?

  • Man, whichever person I talked to last I find the most convincing, probably. So, you know, I talked to Elon about it, and he made this argument: it's just completely illegal for a non-profit to become a for-profit company, because otherwise everyone would set up companies as non-profits and take advantage of the tax laws and then turn them into for-profits, and this is the most obvious arb, and they just can't be allowed to do this. It's obviously just totally illegal what Sam's trying to do at OpenAI. And in the moment, it's like, oh, that's a really strong argument.

  • And then half an hour after the conversation is over, it's like, but, you know, the whole history of OpenAI is that the biggest handicap they had was being a non-profit, and it led to all these crazy conflicting things, culminating in this non-profit board that thought it was better to shut down the company, or the whole venture, whatever you want to call it, rather than keep going. And nobody is ever going to take the lesson from OpenAI to start a non-profit and turn it into a for-profit later, given what a total disaster that was. But yeah, whoever I listened to last I find the most compelling.

  • Let me ask you a different question. You left Silicon Valley. You have now moved to Los Angeles. That's your home.

  • We left San Francisco specifically. Yeah.

  • San Francisco specifically.

  • It just felt like it was time to get out.

  • So tell us why it was time to get out, because I think a lot of the issues that we actually read about, whether around OpenAI or some of the culture issues at a lot of these companies, are the reason you decided you didn't want to live there anymore.

  • Man, it's hard to say; it's a bunch of things that came together. But there was a sense that it was sort of ground zero of the most unhinged place in the country. You had this catastrophic homelessness problem, which maybe is not the most important problem, but it was never getting better. And by 2018, when we moved to LA, it felt like it had become extraordinarily self-hating, where everybody who was not in tech hated the tech industry. This is very odd. It would be like the people in Houston hating oil, or people in Detroit hating cars, or, you know, people in New York hating finance. And so it had this unhinged, self-hating character in the city itself.

  • And there were all these things that seemed extraordinarily unhealthy. And if you had asked me in 2021, you know, I would have said, man, they created all this wealth, and yet they are going to succeed in committing suicide. Three years later, I think the jury is a little bit more out, because maybe the AI revolution is big enough that it will save even the most, I don't know, the most ridiculously mismanaged city in the country.

  • It seemed to me that part of the issue you had with San Francisco was the politics of it. And not just the politics of it, but how politics had seeped into the culture of so many of the companies, and, I think you thought, had moved it in a very progressive way.

  • Yeah, that's always a very clear dimension of it. But that's sort of the tip of the iceberg. That's the part that's above the surface that people always focus on. And then the part that's below the surface is just the deep corruption: the mismanagement of the schools, the buses, all the public services, the way things don't work, the way the zoning is the most absurd in the country. You know, there was a house I was looking to buy where you couldn't build access into the garage. And Gavin Newsom, who was the Lieutenant Governor of California at the time, said he'd help me get a garage access permit. Again, it's not clear that's what the Lieutenant Governor of the fifth largest economy in the world should be doing. But he said he knew how to do this in San Francisco, and it was circa 2013. And then, you know, you needed to get the neighbors to sign off, which was maybe doable. And then you needed to go to the Board of Supervisors, because you had to build a staircase, and it was a public walkway, and the whole public had to comment. Nobody knew what happened then. But then, even harder, a tree had grown where the driveway was supposed to be, and you needed a tree removal permit. And this was the sort of thing that you would never get. And so you can describe all this as crazy left-wing ideology, but I think it's more, you know, really, really deep corruption. And this is, in a way, not just the San Francisco problem, it's the California problem. The analogy I have, if you want to think about the economy of California: in some ways it's analogous to Saudi Arabia.

  • You have a very mismanaged state government. There's a lot of insane ideology that goes with it. But you have these incredible gushers called the big tech companies, and there's a way the super insane governance is linked to the gold rush of the place. And so, yeah, there's some point where it'll be too crazy even for California, but California can get away with a lot of stuff you wouldn't get away with elsewhere. San Francisco, in my judgment, had gone a little bit too far. Maybe with the AI thing, you know, they found one more giant gusher.

  • And maybe you don't have any Saudi money in your fund, I hope.

  • Virtually none, no.

  • Just in case. Here's a different question, though, because it gets to the politics of this. There seems to have been a shift inside Silicon Valley, a shift in terms of even the way the companies are managed, along a political dimension. And you were very outspoken; obviously, you supported President Trump in the last go-around, and I want to get to that part too. But I want you to speak first to the shift in the Valley, at least what seems like a shift, perception-wise, from being a very progressive place to maybe less so. Maybe not; maybe it's just that, you know, Larry Summers and I spoke this afternoon, and he said there are, you know, 10 people he thinks are very loud on Twitter, and that's why the world thinks there's been a shift; that between David Sacks and, you know, a bunch of other people, and Elon Musk, that's not representative. And I think you may have a different view.

  • Well, you know, I don't think you'll get a majority of tech people to support Trump over Biden or anything like that. I think you'll get way more than you had four or eight years ago. So, you know, I don't know if you're measuring a relative shift or an absolute number; those are probably two different measures on that. But I would say that if we ask a very different question about, let's say, extreme wokeness, or I don't even know what you're supposed to call it, there is probably a broad consensus among the good tech founders, the startup CEOs, people across a pretty broad range, that it's gone way too far. I talk to a lot of these people; a lot of them are, I'd say, more centrist Democrats, but it is, you know: we need to have a secret plan to fight this. And what they tell me behind closed doors is way, way tougher than what they dare say in public. And so it is like, you know: we need to have a plan to hire fewer people from San Francisco, because that's where the craziest employees are. So if you want to have a less woke workforce, we're going to have targets for how we steadily move our company out of San Francisco, specifically.

  • And yeah, these are the sort of conversations that I've...

  • And do you agree with this? And by the way, let me just read something. You probably know Alexandr Wang, the Scale AI CEO.

  • Yes.

  • Who said that he's put together what he calls a merit-based hiring program, that he's getting rid of DEI. He said hiring on merit will be a permanent policy at Scale; it's a big deal whenever we invite someone to join our mission, and those decisions have never been swayed by orthodoxy or virtue signaling or whatever the current thing is. He said, I think of our guiding principle as MEI: merit, excellence, and intelligence. Bill Ackman went on to say that he thinks DEI is actually an inherently racist and illegal movement.

  • Yeah, again, my feel for it is there aren't that many people who are willing to say what Alex says, but I think there are an awful lot of people who are pretty close to thinking this, that there were ways they leaned into the DEI thing. It was like an anti-Trump thing.

  • Everything was sort of polarized around Trump for the last four years of his presidency.

  • And so you had to demonstrate that you were anti-Trump by being even more pro-DEI. That's of course not necessarily a logical thing. But yes, people somehow ended up in this place that was very different. And then there are always questions about what drove the DEI movement, the wokeness, in these companies. And it probably is overdetermined.

  • There's probably a bottom-up theory: you know, woke millennial people who were brainwashed into DEI in their colleges. Then there's sort of a cynical corporate version, where the leadership of the company either believed it or used it as a way to manage and control their companies in certain ways. And the part that I always feel is a little bit underestimated is that there was probably also some top-down level, from a government regulatory point of view, where if you don't do DEI, there is some point at which you do get in trouble.

  • This is part of the ESG movement now. I mean, look, we've talked about ESG here for a long time.

  • There was an ESG movement, and then there were probably all these governmental versions. And so, I don't know, my candidate for the company in Silicon Valley that is still probably the most woke would be something like Google. And it's less woke than it was two, three years ago, but in some ways, you know, they have a total monopoly in search. And so there's some way in which, if wokeness is a luxury good, you can afford it more if you're a monopoly than if you're not.

  • And then the problem for Google, as a pretty big monopoly, is that it's always going to be subject to a lot more regulatory pressure from the government. And so you have something like Gemini, the Gemini AI engine, and it's sort of this comical, absurdist thing where it generates these black women Nazis. You know, you're supposed to find famous Nazis, and then the diversity criterion gets applied across the board, and so it just generates fake black women who are Nazis, which is, you know, a little bit too progressive, I think.

  • But then, if you think of it in terms of this larger political context, Google will never get in trouble for that. The FTC will never sue them for misinformation or anything like that. That stuff does not get fact-checked. You don't really get in trouble. And you probably even get some protection: okay, you're going along with the woke directives from the ESG people or the government. Maybe you overdid it a little bit, but we trust you to be good at other things. So there may be a very different calculus if you're a sort of large quasi-regulated monopoly.

  • Let me ask you about large quasi-regulated monopolies, and also concentration, but I want to read you something you actually wrote in your book 10 years ago about Google and it being a monopoly. You said: since it doesn't have to worry about competing with anyone, it has wider latitude to care about its workers, its products, and its impact on the wider world. Google's motto, don't be evil, is in part a branding ploy, but it's also characteristic of a kind of business that's successful enough to take ethics seriously without jeopardizing its own existence. In business, money is either an important thing or it is everything. Monopolists can afford to think about things other than making money; non-monopolists can't. In perfect competition, a business is so focused on today's margins that it can't possibly plan for a long-term future. Only one thing can allow a business to transcend the daily brute struggle for survival: monopoly profits. Were you writing in favor of the monopoly idea then, or against it?

  • Oh, well, my book was giving you advice for what to do, and from the inside, you always want to do something like what Google did. If you're starting a company, competition is for losers. Capitalism and competition: people always say they're synonyms; I think they're antonyms, because if you have perfect competition, you compete away all the capital. If you want to have Darwinian competition, red in tooth and claw, you should open a restaurant. It's an awful, awful business. You will never make any money. It's perfectly competitive and completely non-capitalist. So from the inside, you always want to go for something like monopoly.

  • And then, yes, in other parts of my book I also qualify it: there are dynamic monopolies that invent something new, that create something new for the world, and we reward them with patents or things like that. And then at some point there's always a risk that these monopolies go bad, that they become like a troll collecting a toll at a bridge, that they're not dynamic, and that they sort of become fat and lazy.

  • Are we there yet? I mean, Lina Khan, if she were sitting here, would say we got there a long time ago.