
  • This is what hundreds of millions of gamers in the

  • world play on. It's a GeForce.

  • This is the chip that's inside.

  • For nearly 30 years,

  • Nvidia's chips have been coveted by gamers, shaping

  • what's possible in graphics and dominating the entire

  • market since it first popularized the term

  • graphics processing unit with the GeForce 256.

  • Now its chips are powering something entirely

  • different.

  • ChatGPT has started a very intense conversation.

  • He thinks it's the most revolutionary thing since

  • the iPhone.

  • Venture capital interest in AI startups has skyrocketed.

  • All of us working in this field have been optimistic

  • that at some point the broader world would

  • understand the importance of this technology.

  • And it's actually really exciting that that's

  • starting to happen.

  • As the engine behind large language models like

  • ChatGPT, Nvidia is finally reaping rewards for its

  • investment in AI, even as other chip giants suffer in

  • the shadow of U.S.-China trade tensions and an easing

  • of the chip shortage that's weakened demand.

  • But the California-based chip designer relies on

  • Taiwan Semiconductor Manufacturing Company to

  • make nearly all its chips, leaving it vulnerable.

  • The biggest risk is really kind of U.S.-China relations

  • and the potential impact to TSMC.

  • If I'm a shareholder in Nvidia,

  • that's really the only thing that keeps me up at

  • night.

  • This isn't the first time Nvidia has found itself

  • teetering on the leading edge of an uncertain

  • emerging market.

  • It's neared bankruptcy a handful of times in its

  • history when founder and CEO Jensen Huang bet the

  • company on impossible-seeming ventures.

  • Every company makes mistakes and I make a lot of them.

  • And some of them put the company in

  • peril. Especially in the beginning, because we were

  • small and we're up against very, very large

  • companies and we're trying to invent this brand new

  • technology.

  • We sat down with Huang at Nvidia's Silicon Valley

  • headquarters to find out how he pulled off this

  • latest reinvention and got a behind-the-scenes look at

  • all the ways it powers far more than just

  • gaming.

  • Now one of the world's top ten most valuable companies,

  • Nvidia is one of the rare Silicon Valley giants that,

  • 30 years in, still has its founder at the helm.

  • I delivered the first one of these inside an AI

  • supercomputer to OpenAI when it was first created.

  • 60-year-old Jensen Huang, a Fortune Businessperson of

  • the Year and one of Time's most influential people in

  • 2021, immigrated to the U.S.

  • from Taiwan as a kid and studied engineering at

  • Oregon State and Stanford.

  • In the early 90s, Huang met fellow engineers Chris

  • Malachowsky and Curtis Priem at Denny's, where they

  • talked about dreams of enabling PCs with 3D

  • graphics, the kind made popular by movies like

  • Jurassic Park at the time.

  • If you go back 30 years, at the time, the PC revolution

  • was just starting and there was quite a bit of debate

  • about what is the future of computing and how should

  • software be run.

  • And there was a large camp, and rightfully so, that

  • believed that CPU or general-purpose software was

  • the best way to go.

  • And it was the best way to go for a long time.

  • We felt, however, that there was a class of

  • applications that wouldn't be possible without

  • acceleration.

  • The friends launched Nvidia out of a condo in Fremont,

  • California, in 1993.

  • The name was inspired by N.V.

  • for next version and Invidia, the Latin word for

  • envy. They hoped to speed up computing so much,

  • everyone would be green with envy.

  • At more than 80% of revenue, its primary

  • business remains GPUs.

  • Typically sold as cards that plug into a PC's

  • motherboard, they accelerate, or add computing

  • power to, central processing units, CPUs, from

  • companies like AMD and Intel.

  • You know, they were one among tens of GPU makers at

  • that time. They are the only ones, them and AMD

  • actually, who really survived because Nvidia

  • worked very well with the software community.

  • This is not a chip business.

  • This is a business of figuring out things end to

  • end.

  • But at the start, its future was far from guaranteed.

  • In the beginning there weren't that many

  • applications for it, frankly, and we smartly

  • chose one particular combination that was a home

  • run. It was computer graphics and we applied it

  • to video games.

  • Now Nvidia is known for revolutionizing gaming and

  • Hollywood with rapid rendering of visual effects.

  • Nvidia designed its first high-performance graphics

  • chip in 1997.

  • Designed, not manufactured, because Huang was committed

  • to making Nvidia a fabless chip company, keeping

  • capital expenditure way down by outsourcing the

  • extraordinary expense of making the chips to TSMC.

  • On behalf of all of us, you're my hero.

  • Thank you. Nvidia

  • today wouldn't be here, nor would the other

  • thousand fabless semiconductor companies,

  • if not for the pioneering work that

  • TSMC did.

  • In 1999, after laying off the majority of its workers and

  • nearly going bankrupt in the process, Nvidia released what it

  • claims was the world's first official GPU, the

  • GeForce 256.

  • It was the first programmable graphics card

  • that allowed custom shading and lighting effects.

  • By 2000, Nvidia was the exclusive graphics provider

  • for Microsoft's first Xbox.

  • Microsoft and the Xbox happened at exactly the time

  • that we invented this thing called the programmable

  • shader, and it defines how computer graphics is done

  • today.

  • Nvidia went public in 1999 and its stock stayed largely

  • flat until demand went through the roof during the

  • pandemic. In 2006, it released a software toolkit

  • called CUDA that would eventually propel it to the

  • center of the AI boom.

  • It's essentially a computing platform and

  • programming model that changes how Nvidia GPUs

  • work, from serial to parallel compute.

  • Parallel computing is: let me take a task and attack it

  • all at the same time using much smaller machines.

  • Right? So it's the difference between having an

  • army where you have one giant soldier who is able to

  • do things very well, but one at a time, versus an

  • army of thousands of soldiers who are able to

  • take that problem and do it in parallel.

  • So it's a very different computing approach.
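
To make that contrast concrete, here is a minimal CUDA sketch, not taken from the video, in which a simple array addition stands in for the "task": a plain CPU function works through the array one element at a time, while a GPU kernel hands each element to its own lightweight thread so the whole array is processed at once. The function names and sizes are illustrative only.

    // Illustrative sketch (not Nvidia's own code): serial CPU loop vs.
    // parallel GPU kernel. Compile with nvcc.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Serial: one CPU thread walks the array element by element
    // (the "one giant soldier" in the analogy above).
    void addSerial(const float* a, const float* b, float* out, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = a[i] + b[i];
    }

    // Parallel: thousands of GPU threads, each handling a single element.
    __global__ void addParallel(const float* a, const float* b, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                  // about a million elements
        const size_t bytes = n * sizeof(float);

        // Unified memory keeps the sketch short; production code often
        // manages separate host and device buffers explicitly.
        float *a, *b, *out;
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&out, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // Serial baseline on the CPU.
        addSerial(a, b, out, n);
        printf("serial:   out[0] = %.1f\n", out[0]);

        // Parallel version: enough 256-thread blocks to cover every element.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        addParallel<<<blocks, threads>>>(a, b, out, n);
        cudaDeviceSynchronize();
        printf("parallel: out[0] = %.1f\n", out[0]);   // same result, 3.0

        cudaFree(a); cudaFree(b); cudaFree(out);
        return 0;
    }

The point is the structure rather than the timing: the loop does the work one step at a time, while the kernel launch gives every element its own thread, which is the serial-to-parallel shift CUDA opened up for general-purpose code.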

  • Nvidia's big steps haven't always been in the right

  • direction. In the early 2010s, it made unsuccessful

  • moves into smartphones with its Tegra line of

  • processors.

  • You know, they quickly realized that the smartphone

  • market wasn't for them, so they exited right from that.

  • In 2020, Nvidia closed a long-awaited $7 billion deal

  • to acquire data center chip company Mellanox.

  • But just last year, Nvidia had to abandon a $40 billion

  • bid to acquire Arm, citing significant regulatory

  • challenges. Arm is a major CPU company known for

  • licensing its signature Arm architecture to Apple for

  • iPhones and iPads, Amazon for Kindles and many major

  • carmakers.

  • Despite some setbacks, today Nvidia has 26,000

  • employees, a newly built polygon-themed headquarters

  • in Santa Clara, California, and billions of chips used

  • for far more than just graphics.

  • Think data centers, cloud computing, and most

  • prominently, AI.

  • We're in every cloud made by every computer company.

  • And then all of a sudden one day a new application

  • that wasn't possible before discovers you.

  • More than a decade ago, Nvidia's CUDA and GPUs were

  • the engine behind AlexNet, what many consider AI's Big

  • Bang moment. It was a new, incredibly accurate neural

  • network that obliterated the competition during a

  • prominent image recognition contest in 2012.

  • Turns out the same parallel processing needed to create

  • lifelike graphics is also ideal for deep learning,

  • where a computer learns by itself rather than relying

  • on a programmer's code.

  • We had the good