NVIDIA at GTC October: Accelerating AI and Computing Innovation

Published 28/10/2025, 20:02
© Reuters

On Tuesday, 28 October 2025, NVIDIA Corporation (NASDAQ:NVDA) presented its strategic vision at the GTC October 2025 Keynote. CEO Jensen Huang highlighted the company’s pioneering efforts in accelerated computing and artificial intelligence (AI), unveiling new partnerships and technologies. While NVIDIA’s advancements promise significant growth, the competitive landscape remains challenging.

Key Takeaways

  • NVIDIA introduced a new computing model, the first in sixty years, emphasizing the importance of its CUDA platform.
  • Strategic partnerships include Nokia for 6G technology and the Department of Energy for AI supercomputers.
  • NVIDIA’s Blackwell platform is projected to generate $500 billion in revenue through 2026.
  • The company is expanding into AI factories, quantum computing, and robotics.

Financial Results

  • Cumulative Blackwell and early Rubin ramps are forecasted to reach $500 billion through 2026.
  • NVIDIA expects to ship 6 million Blackwell units in the first five quarters of production.
  • Blackwell’s growth rate is five times that of its predecessor, Hopper.

Operational Updates

  • NVIDIA announced a partnership with Nokia to develop 6G technology using its ARC platform.
  • Collaborating with the Department of Energy to build seven AI supercomputers.
  • Launched NVQLink, an interconnect architecture for quantum processors and NVIDIA GPUs.
  • Partnered with CrowdStrike for enhanced cybersecurity and with Palantir to accelerate data processing.
  • Showcased the NVIDIA DRIVE Hyperion platform for autonomous vehicles with partners like Lucid and Mercedes-Benz.

Future Outlook

  • NVIDIA aims to reduce the cost of AI token generation through extreme co-design.
  • The company is expanding into new markets, including 6G, quantum computing, and AI factories.
  • CEO Jensen Huang emphasized the transformative role of AI, stating, "AI is not a tool. AI is work."

Readers are encouraged to refer to the full transcript for more detailed insights into NVIDIA’s strategic initiatives.

Full transcript - GTC October 2025 Keynote:

Unidentified speaker, Narrator: invention shaped destiny and technology helped dreams take flight. At Bell Labs, the transistor was born, sparking the age of semiconductors and giving rise to Silicon Valley. Hedy Lamarr reimagined communication, paving the way for wireless connectivity. IBM’s System/360 put a universal computer at the heart of industry. Intel’s microprocessor drove the digital age forward, and Cray’s supercomputers expanded the frontiers of science.

Unidentified speaker, Unidentified: So we think we’re at the beginning of something with this technology, and we’re gonna grow just as fast as we can.

Unidentified speaker, Narrator: Apple made computing personal. Hello. I am Macintosh. Microsoft opened the window to a new world of software. You’ve got mail. Long before the web,

US government researchers built ARPANET, connecting the first computers, the foundation for the Internet. An iPod. A phone. Are you getting it? Then Apple again.

Put a thousand songs in your pocket and the Internet in your hand. Every era, a leap. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard. Every leap, America led. But by a leap for vantage.

Now the next era is here, launched by a revolutionary new computing model. This is likely going to be the most important contribution we’ve made to the computer industry. It will likely be recognized as a revolution. Machine learning is a branch of artificial intelligence, computers that almost appear to think. The amount of computational resource is ultimately what’s gonna turbocharge this field.

Artificial intelligence, the new industrial revolution. At its heart, NVIDIA GPUs invented here in America. Like electricity and the Internet, AI is essential infrastructure. Every company will use it. Every nation will build it.

Winning this competition will be a test of our capacities unlike anything since the dawn of the space age. And today, AI factories are rising, built in America for scientists, engineers, and dreamers across universities, startups, and industry. I think we wanna try to reach new heights as a civilization, discovering the nature of the universe. And now, American innovators are clearing the way for abundance, saving lives, shaping vision into reality, lending us a hand, and delivering the future. We will soon power it all with unlimited clean energy.

We will extend humanity’s reach to the stars. This is America’s next Apollo moment. Together, we take the next great leap to boldly go where no one has gone before. And here is where it all begins. Welcome to the stage, NVIDIA founder and CEO, Jensen Huang.

Washington DC. Washington DC, welcome to GTC. It’s hard not to be sentimental and proud of America. I gotta tell you that. Was that video amazing?

Thank you. NVIDIA’s creative team does an amazing job. Welcome to GTC. We have a lot to cover with you today. GTC is where we talk about industry, science, computing, the present, and the future.

So I’ve got a lot to cover with you today, but before I start, I want to thank all of our partners who helped sponsor this great event. You’ll see all of them around the show. They’re here to meet with you, and it’s really great. We couldn’t do what we do without all of our ecosystem partners. This is the Super Bowl of AI, people say.

And therefore, every Super Bowl should have an amazing pregame show. What do you guys think about the pregame show and our all-star athletes and all-star cast? Look at these guys. Somehow, I turned out the buffest. What do you guys think?

I don’t know if I had something to do with that. NVIDIA invented a new computing model for the first time in sixty years, as you saw in the video. A new computing model rarely comes about. It takes an enormous amount of time and a particular set of conditions. We invented this computing model because we wanted to solve problems that general purpose computers, normal computers, could not.

We also observed that someday the number of transistors would continue to grow, but the performance and power efficiency of transistors would slow down, that Moore’s Law would not continue, limited by the laws of physics. And that moment has now arrived. It’s called Dennard scaling, and Dennard scaling has stopped.

Dennard scaling stopped nearly a decade ago, and in fact, transistor performance and its associated power efficiency have slowed tremendously. And yet, the number of transistors has continued to grow. We made this observation a long time ago, and for thirty years, we’ve been advancing this form of computing we call accelerated computing. We invented the GPU, we invented the programming model called CUDA, and we observed that if we could add a processor that takes advantage of more and more transistors, apply parallel computing, and add that to a sequential-processing CPU, we could extend the capabilities of computing well beyond, and that moment has really come. We have now seen that inflection point.

Accelerated computing, its moment has now arrived. However, accelerated computing is a fundamentally different programming model. You can’t just take CPU software, software written by hand, executing sequentially, put it onto a GPU, and have it run properly. In fact, if you just did that, it would actually run slower. And so you have to invent new algorithms.

You have to create new libraries. You have to, in fact, rewrite the application, which is the reason why it’s taken so long. It’s taken us nearly thirty years to get here, but we did it one domain at a time. This is the treasure of our company. Most people talk about the GPU.

The GPU is important, but without a programming model that sits on top of it, and without dedication to keeping that programming model compatible over generations, none of it works. We’re now at CUDA 13, coming up on CUDA 14, with hundreds of millions of GPUs running in every kind of computer, all perfectly compatible. If we didn’t do that, then developers wouldn’t target this computing platform. If we didn’t create these libraries, then developers wouldn’t know how to use the algorithms and use the architecture to its fullest, one application after another.

I mean, this is really the treasure of our company. cuLitho, computational lithography. It took us nearly seven years to get here with cuLitho, and now TSMC uses it, Samsung uses it, ASML uses it. This is an incredible library for computational lithography, the first step of making a chip. Sparse solvers for CAE applications.

cuOpt, a numerical optimization library that has broken just about every single record: the traveling salesperson problem, how to connect millions of products with millions of customers in the supply chain. Warp, a Python framework for CUDA simulation. cuDF, a data-frame approach, basically accelerating SQL databases. And the library that started AI altogether: cuDNN.
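To make the cuDF idea concrete, here is a minimal sketch: the familiar pandas-style dataframe API stays the same while execution moves to CUDA kernels on the GPU. The data and column names below are illustrative, not from the talk.

```python
# Minimal cuDF sketch: pandas-style dataframe operations executed on the GPU.
# The data and column names are illustrative, not from the keynote.
import cudf

df = cudf.DataFrame({
    "region": ["east", "west", "east", "west"],
    "sales": [10.0, 20.0, 30.0, 40.0],
})

# A typical SQL-style aggregation; cuDF runs it with CUDA kernels.
totals = df.groupby("region").agg({"sales": "sum"})
print(totals)
```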

The library on top of cuDNN, called Megatron-Core, made it possible for us to simulate and train extremely large language models. The list goes on. MONAI, really, really important: the number one medical imaging AI framework in the world. By the way, we’re not going to talk a lot about health care today, but be sure to see Kimberly’s keynote.

She’s going to talk a lot about the work that we do in health care. And the list goes on. Genomics processing. Aerial, pay attention: we’re going to do something really important with it here today.

cuQuantum, for quantum computing. These are just a representative sample of the 350 different libraries in our company, and each one of these libraries redesigned the algorithms necessary for accelerated computing. Each one of these libraries made it possible for all of the ecosystem partners to take advantage of accelerated computing, and each one of these libraries opened new markets for us. Let’s take a look at what CUDA-X can do.

Everything you saw was a simulation. There was no art, no animation. This is the beauty of mathematics. This is deep computer science, deep math, and it’s just incredible how beautiful it is. Every industry was covered, from healthcare and life sciences to manufacturing, robotics, autonomous vehicles, computer graphics, even video games.

That first shot that you saw was the first application NVIDIA ever ran, and that’s where we started in 1993. We kept believing in what we were trying to do. It’s hard to imagine, watching that first virtual fighter scene come alive, that the same company would be here today. It’s just a really, really incredible journey. I want to thank all the NVIDIA employees for everything that you’ve done. It’s really incredible.

We have a lot of industries to cover today. I’m going to cover AI, 6G, quantum, models, enterprise computing, robotics, and factories. Let’s get started. We have a lot to cover, a lot of big announcements to make, a lot of new partners that would very much surprise you. Telecommunications is the backbone, the lifeblood of our economy, our industries, our national security.

And yet, since the beginning of wireless, we defined the technology, we defined the global standards, and we exported American technology all around the world so that the world could build on top of American technology and standards. It has been a long time since that’s happened. Wireless technology around the world today is largely deployed on foreign technologies. Our fundamental communication fabric is built on foreign technologies. That has to stop, and we have an opportunity to do that, especially during this fundamental platform shift.

As you know, computer technology is at the foundation of literally every single industry. It is the single most important instrument of science. It’s the single most important instrument of industry. And as I just said, we’re going through a platform shift. That platform shift should be a once-in-a-lifetime opportunity for us to get back into the game, for us to start innovating with American technology.

Today, we’re announcing that we’re gonna do that. We have a big partnership with Nokia. Nokia is the second largest telecommunications equipment maker in the world. It’s a $3 trillion industry. The infrastructure is hundreds of billions of dollars.

There are millions of base stations around the world. If we could partner, we could build on top of this incredible new technology, fundamentally based on accelerated computing and AI, and for the United States, for America, to be at the center of the next revolution in 6G. So today, we’re announcing that NVIDIA has a new product line. It’s called NVIDIA ARC, the Aerial Radio Network Computer. Aerial RAN Computer: ARC.

ARC is built from three fundamental new technologies: the Grace CPU, the Blackwell GPU, and our Mellanox ConnectX networking, designed for this application. And all of that makes it possible for us to run this library, this CUDA-X library that I mentioned earlier called Aerial. Aerial is essentially a wireless communication system running on top of CUDA-X. We’re creating, for the first time, a software-defined, programmable computer that’s able to communicate wirelessly and do AI processing at the same time. This is completely revolutionary.

We call it NVIDIA ARC. And Nokia is going to work with us to integrate our technology and rewrite their stack. This is a company with 7,000 fundamental, essential 5G patents. It’s hard to imagine any greater leader in telecommunications. So we’re going to partner with Nokia.

They’re going to make NVIDIA ARC their future base station. NVIDIA ARC is also compatible with AirScale, the current Nokia base stations. So what that means is we’re going to take this new technology and be able to upgrade millions of base stations around the world with 6G and AI. Now, 6G and AI are really quite fundamental, in the sense that for the first time, we’ll be able to use AI technology, AI for RAN, to make radio communications more spectrally efficient, using artificial intelligence and reinforcement learning, adjusting the beamforming in real time, in context, depending on the surroundings and the traffic, the mobility, the weather. All of that can be taken into account so that we can improve spectral efficiency. Wireless networks consume about 1.5% to 2% of the world’s power.

So improving spectral efficiency increases the amount of data we can put through wireless networks without increasing the amount of energy necessary. The other thing that we can do, beyond AI for RAN, is AI on RAN. This is a brand new opportunity. Remember, the Internet enabled communications, but amazingly smart companies like AWS built a cloud computing system on top of the Internet. We are now going to do the same thing on top of the wireless telecommunications network.

This new cloud will be an edge industrial robotics cloud. So the first is AI for RAN, to improve radio spectrum efficiency. The second is AI on RAN, essentially cloud computing for wireless telecommunications. Cloud computing will be able to go right out to the edge, where data centers are not, because we have base stations all over the world. This announcement is really exciting.

Justin Hotard, the CEO, I think he’s somewhere in the room. Thank you for your partnership. Thank you for helping the United States bring telecommunications technology back to America. This is really a fantastic partnership. Thank you very much. That’s the best way to celebrate Nokia.

Let’s talk about quantum computing. In 1981, the particle physicist, quantum physicist Richard Feynman imagined a new type of computer that could simulate nature directly, because nature is quantum. He called it a quantum computer. Forty years later, just last year, the industry made a fundamental breakthrough.

It is now possible to make one logical qubit. One logical qubit that’s coherent, stable, and error corrected. That one logical qubit consists of sometimes tens, sometimes hundreds of physical qubits, all working together. As you know, qubits, these particles, are incredibly fragile.

They become unstable very easily. Any observation, any sampling, any environmental condition causes them to decohere. And so it takes extraordinarily well controlled environments, and now also a lot of different physical qubits working together, for us to measure what are called auxiliary or syndrome qubits, do error correction on them, and infer what the logical qubit’s state is. There are all kinds of different types of quantum computers: superconducting, photonic, trapped ion, neutral atom, all kinds of different ways to create a quantum computer.

Well, we now realize that it’s essential for us to connect a quantum computer directly to a GPU supercomputer so that we could do the error correction, so that we could do the artificial intelligence calibration and control of the quantum computer, and so that we could do simulations collectively, working together, the right algorithms running on the GPUs, the right algorithms running on the QPUs, and the two processors, the two computers working side by side. This is the future of quantum computing. Let’s take a look. There are many ways to build a quantum computer. Each uses qubits, quantum bits, as its core building block.

But no matter the method, all qubits, whether superconducting qubits, trapped ions, neutral atoms or photons, share the same challenge. They’re fragile and extremely sensitive to noise. Today’s qubits remain stable for only a few hundred operations. But solving meaningful problems requires trillions of operations. The answer is quantum error correction.

Measuring disturbs a qubit, which destroys the information inside it. The trick is to add extra, entangled qubits so that measuring them gives us enough information to calculate where errors occurred, without damaging the qubits we care about. It’s brilliant, but it needs beyond-state-of-the-art conventional compute. That’s why we built NVQLink, a new interconnect architecture that directly connects quantum processors with NVIDIA GPUs. Quantum error correction requires reading out information from qubits, calculating where errors occurred, and sending data back to correct them.

NVQLink is capable of moving terabytes of data to and from quantum hardware, thousands of times every second, as needed for quantum error correction. At its heart is CUDA-Q, our open platform for quantum GPU computing. Using NVQLink and CUDA-Q, researchers will be able to do more than just error correction. They will also be able to orchestrate quantum devices and AI supercomputers to run quantum GPU applications. Quantum computing won’t replace classical systems.

They will work together, fused into one accelerated quantum supercomputing platform. Wow. This is a really long stage. You know, CEOs, we don’t just sit at our desks typing. This is a physical job.

And so today, we’re announcing NVQLink. It’s made possible by two things. Of course, this interconnect, which does quantum computer control and calibration and quantum error correction, and which also connects two computers, the QPU and our GPU supercomputers, to do hybrid simulations. It is also completely scalable.

It doesn’t just do the error correction for today’s few qubits; it does error correction for tomorrow, where we’re going to essentially scale up these quantum computers from the hundreds of qubits we have today to tens of thousands, hundreds of thousands of qubits in the future. So we now have an architecture that can do control, co-simulation, and quantum error correction, and scale into that future. The industry support has been incredible. Remember, CUDA was designed for GPU-CPU accelerated computing, basically using the right processor, the right tool, for the right job. Now, CUDA-Q has been extended beyond CUDA so that we can support QPUs and have the two processors, QPU and GPU, work together, with computation moving back and forth within just a few microseconds, the essential latency to be able to cooperate with the quantum computer.
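To make that round trip concrete, here is a rough sketch of the error-correction cycle NVQLink is built to serve. Every function below is a hypothetical stand-in, not the CUDA-Q API; the point is the read-decode-correct loop that has to close in microseconds between QPU and GPU.

```python
# Hedged sketch of the QPU-GPU error-correction round trip described above.
# All three functions are hypothetical stand-ins, not a real CUDA-Q API.
import random

def read_syndromes(num_ancilla):
    # Stand-in for measuring the auxiliary (syndrome) qubits on the QPU.
    return [random.randint(0, 1) for _ in range(num_ancilla)]

def decode_on_gpu(syndromes):
    # Stand-in for a GPU decoder inferring which physical qubits erred.
    return [i for i, bit in enumerate(syndromes) if bit == 1]

def apply_corrections(error_locations):
    # Stand-in for sending correction operations back to the QPU.
    pass

# This loop has to complete in a few microseconds, thousands of times every
# second, which is why QPU-to-GPU latency matters so much.
for _ in range(1000):
    syndromes = read_syndromes(num_ancilla=16)
    errors = decode_on_gpu(syndromes)
    apply_corrections(errors)
```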

So now, CUDA-Q is such an incredible breakthrough, adopted by so many different developers. We are announcing today 17 different quantum computer industry companies supporting NVQLink. And, I’m so excited about this, eight different DOE labs: Berkeley, Brookhaven, Fermilab in Chicago, Lincoln Laboratory, Los Alamos, Oak Ridge, Pacific Northwest, and Sandia National Laboratories. Just about every single DOE lab has engaged us, working with our ecosystem of quantum computer companies and these quantum controllers, so that we can integrate quantum computing into the future of science. Well, I have one more additional announcement to make.

Today, we’re announcing that the Department of Energy is partnering with NVIDIA to build seven new AI supercomputers to advance our nation’s science. I have to give a shout-out to Secretary Chris Wright. He has brought so much energy to the DOE, a surge of energy, a surge of passion, to make sure that America leads science again. As I mentioned, computing is the fundamental instrument of science, and we are going through several platform shifts. On the one hand, we’re going to accelerated computing.

That’s why every future supercomputer will be a GPU-based supercomputer. We’re going to AI, so that AI and principled solvers work together. Principled physics simulation is not going to go away, but it can be augmented, enhanced, and scaled using surrogate models, AI models, working together. We also know that principled solvers, classical computing, can be enhanced to understand the state of nature using quantum computing. We also know that in the future, we have so much signal, so much data to sample from the world, that remote sensing is more important than ever. And it’s impossible for these laboratories to experiment at the scale and speed we need unless they’re robotic factories, robotic laboratories.

So all of these different technologies are coming into science at exactly the same time. Secretary Wright understands this, and he wants the DOE to take this opportunity to supercharge itself and make sure that the United States stays at the forefront of science. I want to thank all of you for that. Thank you. Let’s talk about AI.

What is AI? Most people would say that AI is a chatbot, and rightfully so. There’s no question that ChatGPT is at the forefront of what people would consider AI. However, just as you see right now, these scientific supercomputers are not going to run chatbots. They’re going to do basic science.

The world of AI is much, much more than a chatbot. Of course, the chatbot is extremely important, and AGI is fundamentally critical. Deep computer science, incredible computing, great breakthroughs are still essential for AGI. But beyond that, AI is a lot more. I’m going to describe AI in a couple of different ways.

The first way to think about AI is that it has completely reinvented the computing stack. The way we used to do software was hand coding, hand-coded software running on CPUs. Today, AI is machine learning: training, data-intensive programming, if you will, learned by the machine and run on a GPU. In order to make that happen, the entire computing stack has changed. Notice, you don’t see Windows up here.

You don’t see CPU up here. You see a fundamentally different stack. Everything from the need for energy, and this is another area where our administration, President Trump, deserves enormous credit: his pro-energy initiative, his recognition that this industry needs energy to grow, needs energy to advance, and that we need energy to win. His recognition of that and putting the weight of the nation behind pro-energy growth completely changed the game. If this didn’t happen, we could have been in a bad situation.

And I want to thank President Trump for that. On top of energy are these GPUs. And these GPUs are connected into, built into, infrastructure that I’ll show you later. This infrastructure consists of giant data centers, easily many times the size of this room, drawing an enormous amount of energy, and it transforms that energy, through this new machine called the GPU supercomputer, to generate numbers. These numbers are called tokens.

The language, if you will, the computational unit, the vocabulary of artificial intelligence. You can tokenize almost anything. You can tokenize, of course, English words. You can tokenize images. That’s the reason why you’re able to recognize images or generate images.

Tokenize video, tokenize 3D structures. You could tokenize chemicals and proteins and genes. You could tokenize cells. Tokenize almost anything with structure, anything with information content. Once you can tokenize it, AI can learn that language and the meaning of it.

Once it learns the meaning of that language, it can translate, it can respond just like you respond, just like you interact with ChatGPT, and it can generate just as ChatGPT can generate. So for all of the fundamental things that you see ChatGPT do, all you have to do is imagine: what if it was a protein? What if it was a chemical? What if it was a 3D structure like a factory? What if it was a robot?

And the token was understanding behavior, tokenizing motion and action. All of those concepts are basically the same, which is the reason why AI is making such extraordinary progress. And on top of these models are applications. The transformer is not a universal model; it’s an incredibly effective model, but there’s no one universal model. It’s just that AI has universal impact.

There are so many different types of models. In the last several years, we enjoyed the invention of, and went through the innovation breakthroughs of, multimodality. There are CNN models, convolutional neural network models; there are state space models; there are graph neural network models; multimodal models; and of course all the different tokenizations and token methods that I just described. You could have models that are spatial in their understanding, optimized for spatial awareness.

You could have models that are optimized for long sequences, recognizing subtle information over a long period of time. There are so many different types of models. On top of these models are architectures, and on top of these model architectures are applications. And this is a profound observation about artificial intelligence: the software industry of the past was about creating tools.

Excel is a tool. Word is a tool. A web browser is a tool. The reason why I know these are tools is because you use them. The tools industry, just like screwdrivers and hammers, is only so large.

In the case of IT, they could be database tools. These IT tools amount to about a trillion dollars or so. But AI is not a tool. AI is work. That is the profound difference.

AI is in fact workers that can actually use tools. One of the things I’m really excited about is the work that Aravind is doing at Perplexity: using web browsers to book vacations or do shopping, basically an AI using tools. Cursor is an agentic AI system that we use at NVIDIA. Every single software engineer at NVIDIA uses Cursor.

It has improved our productivity tremendously. It’s basically a partner for every one of our software engineers to generate code, and it uses a tool. The tool it uses is called VS Code. So Cursor is an agentic AI system that uses VS Code. The same goes for all of these different industries, whether it’s chatbots or digital biology, where we have AI research assistants. Or what is a robotaxi?

Inside a robotaxi, of course, it’s invisible, but obviously there’s an AI chauffeur. That chauffeur is doing work, and the tool that it uses to do that work is the car. Everything that we’ve made up until now, the whole world over, are tools, tools for us to use. For the very first time, technology is now able to do work and help us be more productive. The list of opportunities goes on and on, which is the reason why AI addresses the segment of the economy that IT has never addressed.

IT is a few trillion dollars that sits underneath the tools of a $100 trillion global economy. Now, for the first time, AI is going to engage that $100 trillion economy and make it more productive, make it grow faster, make it larger. We have a severe shortage of labor. Having AI that augments labor is going to help us grow. Now, what’s also interesting about this from a technology industry perspective is that in addition to the fact that AI is new technology that addresses new segments of the economy, AI in itself is also a new industry.

This token, as I was explaining earlier, these numbers, after you tokenize all these different modalities of information, there’s a factory that needs to produce these numbers. Unlike the computer industry and the chip industry of the past, notice: the chip industry of the past represents about 5% to 10%, maybe less, of a few-trillion-dollar IT industry. And the reason for that is that it doesn’t take that much computation to use Excel. It doesn’t take that much computation to use browsers. It doesn’t take that much computation to use Word.

We do the computation. But in this new world, there needs to be a computer that understands context all the time. It can’t be precomputed, because every time you use the computer for AI, every time you ask the AI to do something, the context is different. So it has to process all of that information. For example, in the case of a self-driving car, it has to process the context of the car’s environment.

Context processing. What is the instruction you’re asking the AI to carry out? Then it’s got to go and break down the problem step by step, reason about it, come up with a plan, and execute it. Every single one of those steps requires an enormous number of tokens to be generated, which is the reason why we need a new type of system, and I call it an AI factory. It’s an AI factory for sure.

It’s unlike a data center of the past. It’s an AI factory because this factory produces one thing, unlike the data centers of the past that do everything: store files for all of us, run all kinds of different applications. You could use that data center, like you can use your computer, for all kinds of applications. You could use it to play a game one day. You could use it to browse the web.

You could use it, you know, to do your accounting. And so that is a computer of the past, a universal, general purpose computer. The computer I’m talking about here is a factory. It runs basically one thing. It runs AI, and its purpose is to produce tokens that are as valuable as possible, meaning they have to be smart.

And you want to produce these tokens at incredible rates, because when you ask an AI for something, you would like it to respond. And notice, during peak hours, these AIs are now responding slower and slower, because they’ve got a lot of work to do for a lot of people. So you want to produce valuable tokens at incredible rates, and you want to produce them cost-effectively. Every single word that I used is consistent with an AI factory, with a car factory, or any factory. It is absolutely a factory, and these factories never existed before, and inside these factories are mountains and mountains of chips, which brings us to today.

What happened in the last couple of years? And in fact, what happened this last year? Something fairly profound happened this year. At the beginning of the year, everybody had some attitude about AI. That attitude was generally: this is going to be big, it’s going to be the future. And somehow, a few months ago, it kicked into turbocharge.

And the reason for that is several things. The first is that, in the last couple of years, we have figured out how to make AI much, much smarter, beyond just pre-training. Pre-training basically says: let’s take all of the information that humans have ever created and give it to the AI to learn from. It’s essentially memorization and generalization. It’s not unlike going to school back when we were kids, the first stage of learning.

Pre-training, just as preschool, was never meant to be the end of education. Pre-training simply teaches the basic skills of intelligence so that you can learn everything else. Without vocabulary, without an understanding of language, how to communicate, how to think, it’s impossible to learn everything else. The next stage is post-training. Post-training, after pre-training, teaches skills: skills to solve problems, break them down, reason about them, how to solve math problems, how to code, how to think about these problems step by step.

Use first-principles reasoning. And then, after that is where computation really kicks in. As you know, many of us went to school, in my case decades ago. But ever since, I’ve learned more and thought about more, and the reason is that we’re constantly grounding ourselves in new knowledge, constantly doing research, and constantly thinking. Thinking is really what intelligence is all about.

And so now we have three fundamental technologies. We have pre-training, which still requires an enormous amount of computation. We now have post-training, which uses even more computation. And now thinking puts incredible amounts of computational load on the infrastructure, because it’s thinking on our behalf for every single human. So the amount of computation necessary for AI to think, to do inference, is really quite extraordinary. Now, I used to hear people say that inference is easy.

NVIDIA should do training. NVIDIA is, you know, really good at this, so they’ll do training; inference was supposed to be easy. But how could thinking be easy? Regurgitating memorized content is easy.

Regurgitating the multiplication table is easy. Thinking is hard, which is the reason why these three scales, these three new scaling laws, all of them at full steam, have put so much pressure on the amount of computation. Now, another thing has happened. From these three scaling laws, we get smarter models, and these smarter models need more compute. But when you get smarter models, you get more intelligence.

People use it. If anything happens, I wanna be the first one out. Just kidding. I’m sure it’s fine. Probably just lunch.

My stomach. Was that me? So, where was I? The smarter your models are, the more people use them. They’re now more grounded.

It’s able to reason. It’s able to solve problems it never learned how to solve before, because it can do research. Go learn about it, come back, break it down, reason about how to answer your question, how to solve your problem, and go off and solve it. The amount of thinking is making the models more intelligent. The more intelligent it is, the more people use it.

The more intelligent it is, the more computation is necessary. But here’s what happened. This last year, the AI industry turned a corner, meaning that the AI models are now smart enough that they’re worth paying for. NVIDIA pays for every license of Cursor, and we gladly do it. We gladly do it because Cursor is helping a several-hundred-thousand-dollar-a-year software engineer or AI researcher be many, many times more productive.

So of course, we’d be more than happy to do that. These AI models have become good enough that they are worth paying for. Cursor, ElevenLabs, Synthesia, Abridge, OpenEvidence, the list goes on. Of course OpenAI, and of course Claude. These models are now so good that people are paying for them, and because people are paying for them and using more of them, and every time they use more, you need more compute, we now have two exponentials.

These two exponentials: one is the exponential compute requirement of the three scaling laws, and the second is that the smarter it is, the more people use it, and the more computing it needs. Two exponentials are now putting pressure on the world’s computational resources, at exactly the time when, as I told you earlier, Moore’s Law has largely ended. So the question is, what do we do? If we have these two exponential demands growing and we don’t find a way to drive the cost down, then this positive feedback system, this circular feedback system called the virtuous cycle, essential for almost any industry, for any platform industry, could stall.

It was essential for NVIDIA. We have now reached the virtuous cycle of CUDA. The more applications people create, the more valuable CUDA is. The more valuable CUDA is, the more CUDA computers are purchased. The more CUDA computers are purchased, the more developers want to create applications for it.

That virtuous cycle for NVIDIA was achieved after thirty years. And fifteen years later, we’ve now achieved it for AI as well. AI has now reached the virtuous cycle. And so the more you use it, because the AI is smart and we pay for it, the more profit is generated.

The more profit is generated, the more compute is put on the grid, the more compute is put into AI factories. The more compute, the smarter the AI becomes. The smarter it is, the more people use it. The more applications use it, the more problems we can solve. This virtuous cycle is now spinning.

What we need to do is drive the cost down tremendously, so that, one, the user experience is better: when you prompt the AI, it responds to you much faster. And two, so that we keep this virtuous cycle going by driving its cost down, so that it can get smarter, so that more people will use it, and so on and so forth. That virtuous cycle is now spinning. But how do we do that when Moore’s Law has really reached its limit? Well, the answer is called co-design.

You can’t just design chips and hope that everything on top of them is going to go faster. The best you can do in designing chips is add, I don’t know, 50% more transistors every couple of years. And sure, we can keep adding more transistors; TSMC, an incredible company, has got a lot of transistors. However, that’s all percentages, not exponentials.

We need to compound exponentials to keep this virtuous cycle going. We call it extreme co-design. NVIDIA is the only company in the world today that literally starts from a blank sheet of paper and can think about new fundamental computer architecture, new chips, new systems, new software, new model architectures, and new applications all at the same time. So many of the people in this room are here because you’re different parts of that stack, working with NVIDIA. We fundamentally re-architect everything from the ground up.

And then, because AI is such a large problem, we scale it up. We created a whole computer, a computer that for the first time has scaled up into an entire rack. That’s one computer, one GPU. And then we scale it out by inventing a new AI Ethernet technology we call Spectrum-X Ethernet. Everybody will say Ethernet is Ethernet. Ethernet is hardly Ethernet.

Spectrum-X Ethernet is designed for AI performance, and that’s the reason it’s so successful. And even that’s not big enough. We’ll fill this entire room with AI supercomputers and GPUs. That’s still not big enough, because the number of applications and the number of users for AI is continuing to grow exponentially, so we connect multiple of these data centers together, and we call that scale-across: Spectrum-XGS, Spectrum-X gigascale.

By doing so, we do co-design at such an enormous level, such an extreme level, that the performance benefits are shocking. Not 50% better each generation, not 25% better each generation, but much, much more. This is the most extreme co-designed computer we’ve ever made, and quite frankly, ever made in modern times. Since the IBM System/360, I don’t think a computer has ever been reinvented from the ground up like this. This system was incredibly hard to create.

I’ll show you the benefits in just a second. But essentially, what we’ve done, what we’ve done is create, oh, hey, Janine, you can come out. You had to meet me like halfway. Alright. So this is kinda like Captain America’s shield.

So, NVLink 72. If we were to create one giant chip, one giant GPU, this is what it would look like. This is the level of wafer-scale processing we would have to do. It’s incredible. All of these chips are now put into one giant rack. Did I do that, or did somebody else do that?

Into that one giant rack. You know, sometimes, I don’t feel like I’m up here by myself. This one giant rack makes all of these chips work together as one. It’s actually completely incredible, and I’ll show you the benefits of that. The way it looks is this.

So thanks, Janine. I like this. Alright. Ladies and gentlemen, Janine Paul. I got it.

Next time, I’m just gonna go like Thor. It’s like when you’re at home and you can’t reach the remote, and you just go like this, and somebody brings it to you. Yeah, same idea. It never happens to me. I’m just dreaming about it.

I’m just saying. Okay. So anyhow, this is what we created in the past. This is NVLink 8. Now, these models are so gigantic that the way we solve it is we turn this model, this gigantic model, into a whole bunch of experts.

It’s a little bit like a team. These experts are good at certain types of problems, and we collect a whole bunch of experts together. And so this giant multi-trillion-parameter AI model has all these different experts, and we put all these different experts on the GPUs. Now, this is NVLink 72. We can put all of the chips into one giant fabric, and every single expert can talk to each other.

So the primary expert can talk to all of the distributed experts, sending all of the necessary context, prompts, and tokens to all of them. Whichever experts are selected to produce the answer then go off and respond, and they do that layer after layer after layer. Sometimes there are eight experts, sometimes 16, sometimes 64, sometimes 256; the point is, there are more and more experts. Well, here, with NVLink 72, we have 72 GPUs, and because of that, we can put four experts on one GPU. The most important thing each GPU needs to do is generate tokens, and that is governed by the amount of bandwidth you have in HBM memory.

So here we have one GPU thinking for four experts, versus over there, because each computer can hold only eight GPUs, we have to put 32 experts on one GPU. That one GPU has to think for 32 experts, versus this system, where each GPU only has to think for four. And because of that, the speed difference is incredible. And this just came out; this is the benchmark done by SemiAnalysis. They do a really, really thorough job, and they benchmarked all of the GPUs that are benchmarkable, and it turns out that’s not many.

If you look at the list of GPUs you can actually benchmark, it’s like 90% NVIDIA. Okay? So we’re comparing ourselves to ourselves, but the second best GPU in the world is the H200, and it runs all the workloads. Grace Blackwell, per GPU, is 10 times the performance. Now, how do you get 10 times the performance when it’s only twice the number of transistors?

Well, the answer is extreme co-design. By understanding the nature of future AI models and thinking across that entire stack, we can create architectures for the future. This is a big deal. It says we can now respond a lot faster. But this is an even bigger deal, this next one.

Look at this. This says that the lowest cost tokens in the world are generated by Grace Blackwell NVLink 72, the most expensive computer. On the one hand, GB200 is the most expensive computer. On the other hand, its token generation capability is so great that it produces them at the lowest cost, because the tokens per second divided by the total cost of ownership of Grace Blackwell is so good. It is the lowest cost way to generate tokens.
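Both claims reduce to simple arithmetic. A back-of-envelope sketch: the expert counts echo the talk, while the TCO and throughput figures below are placeholders, not NVIDIA numbers.

```python
# Back-of-envelope arithmetic behind the two claims above.
import math

# Mixture-of-experts placement: more GPUs in one NVLink domain means fewer
# experts per GPU, leaving more HBM bandwidth for each expert's tokens.
experts = 256                              # an expert count cited in the talk
print(math.ceil(experts / 72))             # NVLink 72 -> 4 experts per GPU
print(math.ceil(experts / 8))              # NVLink 8  -> 32 experts per GPU

# Cost per token = total cost of ownership / tokens produced over the
# system's life. Both inputs below are placeholders, not NVIDIA figures.
tco_dollars = 3.0e6                        # hypothetical rack TCO
tokens_per_second = 1.0e6                  # hypothetical sustained throughput
lifetime_seconds = 4 * 365 * 24 * 3600     # assume a four-year service life
cost_per_token = tco_dollars / (tokens_per_second * lifetime_seconds)
print(f"${cost_per_token:.2e} per token")
```

The point of the division is that a higher sticker price can still win on cost per token if throughput is high enough.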

By doing so, delivering incredible performance, 10 times the performance, and delivering 10 times lower cost, that virtuous cycle can continue. Which then brings me to this one; I saw this literally yesterday. This is the CSP CapEx. People are asking me about CapEx these days, and this is a good way to look at it. This is the CapEx of the top six CSPs: Amazon, CoreWeave, Google, Meta, Microsoft, and Oracle. Okay?

These CSPs together are going to invest this much in CapEx, and I would tell you the timing couldn’t be better. And the reason is that we now have Grace Blackwell NVLink 72 in full volume production, with the supply chain everywhere in the world manufacturing it, so we can now deliver this new architecture to all of them, so that the CapEx is invested in instruments, computers, that deliver the best TCO. Now, underneath this, there are two things going on. When you look at this, it’s actually fairly extraordinary. But what’s happening underneath is this.

There are two platform shifts happening at the same time. One platform shift is going from general purpose computing to accelerated computing. Remember, accelerated computing, as I mentioned to you before, does data processing, image processing, computer graphics, computation of all kinds. It runs SQL, it runs Spark; you tell us what you need to have run, and I’m fairly certain we have an amazing library for you. You could be a data center trying to make masks to manufacture semiconductors.

We have a great library for you. And so underneath, irrespective of AI, the world is moving from general purpose computing to accelerated computing. In fact, many of the CSPs already had services here long before AI. Remember, they were invented in the era of machine learning: classical machine learning algorithms like XGBoost, and data frames that are used for recommender systems, collaborative filtering, content filtering. All of those technologies were created in the old days of general purpose computing.

Even those algorithms, even those architectures, are now better with accelerated computing. And so even without AI, the world’s CSPs are going to invest in acceleration. NVIDIA’s GPU is the only GPU that can do all of that plus AI. An ASIC might be able to do AI, but it can’t do any of the others. NVIDIA can do all of that, which explains why it is so safe to just lean into NVIDIA’s architecture.

We have now reached our virtuous cycle, our inflection point, and this is quite extraordinary. I have many partners in the room, and all of you are part of our supply chain, and I know how hard you guys are working. I want to thank all of you for how hard you are working. Thank you very much. Now I’m going to show you why.

This is what’s going on in our company’s business. We’re seeing extraordinary growth for Grace Blackwell for all the reasons that I just mentioned. It’s driven by two exponentials. We now have visibility. I think we’re probably the first technology company in history to have visibility into half a trillion dollars of cumulative Blackwell and early ramps of Rubin through 2026.

And as you know, 2025 is not over yet, and 2026 hasn’t started. This is how much business is on the books: half a trillion dollars’ worth so far. Out of that, we’ve already shipped 6 million Blackwells in, I guess, the first three and a half quarters of production. We still have one more quarter to go in 2025, and then we have four more quarters.

So over the next five quarters, there’s $500 billion, half a trillion dollars. That’s five times the growth rate of Hopper. That kind of tells you something. This is Hopper’s entire life. And this doesn’t include China, so this is just the West.

Okay? We’re excluding China. So Hopper, in its entire life: 4 million GPUs. Blackwell: each Blackwell has two GPUs in one large package, and that’s 20 million GPUs of Blackwell and the early parts of Rubin.

Incredible growth. So I want to thank all of our supply chain partners, everybody. I know how hard you guys are working. I made a video to celebrate your work. Let’s play it.

The age of AI has begun. Blackwell is its engine, an engineering marvel. In Arizona, it starts as a blank silicon wafer. Hundreds of chip processing and ultraviolet lithography steps build up each of the 200 billion transistors, layer by layer, on a 12-inch wafer. In Indiana, HBM stacks are assembled in parallel.

HBM memory dies with 1,024 IOs are fabricated using advanced EUV technology. Through-silicon vias are used in the back end to connect the 12-high stack of HBM memory dies and base die to produce HBM. Meanwhile, the wafer is scribed into individual Blackwell dies, tested and sorted, separating the good dies to move forward. The chip-on-wafer-on-substrate process attaches 32 Blackwell dies and 128 HBM stacks onto a custom silicon interposer wafer. Metal interconnect traces are etched directly into it, connecting Blackwell GPUs and HBM stacks into each system-in-package unit, locking everything into place.

Then the assembly is baked, molded, and cured, creating the GB300 Blackwell Ultra Superchip. In Texas, robots work around the clock to pick and place over 10,000 components onto the Grace Blackwell PCB. In California, ConnectX-8 SuperNICs for scale-out communications and BlueField-3 DPUs for offloading and accelerating networking, storage, and security are carefully assembled into GB300 compute trays. These come together into the world’s largest rack-scale system, weighing nearly two tons.

From silicon in Arizona and Indiana to systems in Texas, Blackwell and future NVIDIA AI factory generations will be built in America, writing a new chapter in American history and industry: America’s return to making things, a reindustrialization reignited by the age of AI. The age of AI has begun, made in America, made for the world. We are manufacturing in America again.

It is incredible. The first thing that President Trump asked me for is: bring manufacturing back. Bring manufacturing back because it’s necessary for national security. Bring manufacturing back because we want the jobs, we want that part of the economy. And nine months later, we are now manufacturing Blackwell in full production in Arizona.

Blackwell, GB200, Grace Blackwell NVLink 72: extreme co-design gives us 10x generationally. It’s utterly incredible. Now, the part that’s really incredible is this. This is the first AI supercomputer we made. This is from 2016, when I delivered it to a startup in San Francisco, which turned out to have been OpenAI.

This was the computer. And in order to create that computer, we designed one new chip. In order for us to do co-design now, look at all of the chips we have to do. This is what it takes.

You’re not going to take one chip and make a computer 10 times faster. That’s not going to happen. The way to make computers 10 times faster, so that we can keep increasing performance exponentially and keep driving costs down exponentially, is extreme co-design, working on all these different chips at the same time. We now have Rubin back home. This is Rubin.

This is Vera Rubin. Ladies and gentlemen, Rubin. This is our third generation NVLink 72 rack-scale computer. Third generation. GB200 was the first one.

All of our partners around the world, I know how hard you guys worked. It was insanely hard. It was insanely hard to do. The second generation was so much smoother. And this generation, look at this, is completely cableless.

Completely cableless. And this is all back in the lab now. This is the next generation, Rubin. While we’re shipping GB300s, we’re preparing Rubin to be in production this time next year, maybe slightly earlier. And so every single year, we are going to come up with the most extreme co-designed system, so that we can keep driving up performance and keep driving down the token generation cost.

Look at this. This is just an incredibly beautiful computer. Now, this is amazing. This is 100 petaflops. I know that doesn’t mean anything by itself.

100 petaflops. But compared to the DGX-1 I delivered to OpenAI nine years ago, this right here is 100 times the performance of that supercomputer. A hundred of those, let’s see, a hundred of those would be like 25 of these racks, all replaced by this one thing. One Vera Rubin. Okay.

So this is the compute tray, and this is the Vera Rubin Superchip. Okay? And this is the compute tray, this one right up here. It’s incredibly easy to install.

Just flip these things open and shove it in. Even I could do it. Okay? And this is the Vera Rubin compute tray. If you decide you want to add a special processor, well, we’ve added another processor.

It’s called a context processor, because the amount of context that we give AIs is getting larger and larger. We want it to read a whole bunch of PDFs before it answers a question, to read a whole bunch of arXiv papers, to watch a whole bunch of videos: go learn all of this before you answer my question. All of that context processing can be added.

And so you see, on the bottom, eight new ConnectX-9 SuperNICs. You have CPXs, eight of them. You have BlueField-4, this new data processor, two Vera CPUs, and four Rubin packages, or eight Rubin GPUs. All of that in this one node, completely cableless, 100% liquid cooled. And then this new processor, I won’t talk too much about it today, I don’t have enough time, but it’s completely revolutionary, and the reason is that your AIs need to have more and more memory.
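The memory he is referring to is dominated by the KV cache, the per-conversation state a transformer keeps around. A rough sizing sketch; every model dimension below is an illustrative assumption, not a Rubin specification.

```python
# Rough KV-cache sizing: why AI memory balloons with context length.
# Every dimension below is an illustrative assumption, not a Rubin spec.
layers = 80               # transformer layers
kv_heads = 8              # key/value heads (grouped-query attention)
head_dim = 128            # dimension per head
context_tokens = 128_000  # conversation history kept resident
bytes_per_value = 2       # fp16

# Keys and values are both cached, hence the leading factor of 2.
kv_bytes = 2 * layers * kv_heads * head_dim * context_tokens * bytes_per_value
print(f"{kv_bytes / 1e9:.0f} GB of cache for one long session")  # ~42 GB
```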

You’re interacting with it more; you want it to remember your last conversation, everything it has learned on your behalf: please don’t forget it when I come back next time. All of that memory creates this thing called KV caching, and retrieving that KV cache, you might have noticed, is why every time you go into your AIs these days, it takes longer and longer to refresh and retrieve all of the previous conversations. The reason is that we need a revolutionary new processor for it, and that’s called BlueField-4. Next is the ConnectX switch, excuse me, the NVLink switch, which is right here. Okay? This is the NVLink switch.

This is what makes it possible for us to connect all of the computers together, and this switch is now several times the bandwidth of the entire world’s peak Internet traffic. And so that spine is going to communicate and carry all of that data simultaneously to all of the GPUs. On top of that, this is the Spectrum-X switch, and this Ethernet switch was designed so that all of the processors can talk to each other at the same time and not gum up the network. Gum up the network. That’s very technical.

Okay? So there are these three combined, and then this is the Quantum switch. This is for InfiniBand. This is Ethernet. We don’t care what language you would like to use, whatever standard you like to use: we have great scale-out fabrics for you, whether it’s InfiniBand or Spectrum-X

Ethernet. This one uses silicon photonics and completely co-packaged optics. Basically, the laser comes right up to the silicon and connects to our chips. Okay, so this is the Spectrum-X Ethernet. And so now let’s talk about, thank you. Oh, this is what it looks like.

This is a rack. This is two tons, 1,500,000 parts, and the spine, this spine right here, carries the entire Internet’s traffic in one second. That same volume of data moves across all of these different processors, 100% liquid-cooled, all for the fastest token generation rate in the world. Okay? So that’s what a rack looks like.

Now, that’s one rack. A gigawatt data center would have, call it, let’s see, 16 racks to a unit and then 500 of those, so whatever 500 times 16 is. And so call it 8,000 of these, maybe 9,000, and that would be a one-gigawatt data center. Okay?
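
Taking the spoken arithmetic at face value, here is a quick back-of-envelope check (illustrative only) of what 500 × 16 racks implies for per-rack power:

```python
# Back-of-envelope check of the rack count quoted above; illustrative
# only, taking the keynote's "500 times 16" grouping at face value.
racks = 500 * 16                      # 8,000 racks, the figure Huang lands on
datacenter_watts = 1e9                # a one-gigawatt facility
per_rack_kw = datacenter_watts / racks / 1e3
print(f"{racks} racks -> ~{per_rack_kw:.0f} kW per rack")  # ~125 kW
```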

And so that’s a future AI factory. Now, as you’ve noticed, NVIDIA started out by designing chips, then we started to design systems, and then we designed AI supercomputers. Now we’re designing entire AI factories. Every single time we move out and integrate more of the problem to solve, we come up with better solutions. We now build entire AI factories.

This AI factory is what we’re building for Vera Rubin, and we created a technology that makes it possible for all of our partners to integrate into this factory digitally. Let me show it to you. The next industrial revolution is here. And with it, a new kind of factory. AI infrastructure is an ecosystem scale challenge requiring hundreds of companies to collaborate.

NVIDIA Omniverse DSX is a blueprint for building and operating gigascale AI factories. For the first time, the building, power, and cooling are co-designed with NVIDIA’s AI infrastructure stack. It starts in the Omniverse digital twin. Jacobs Engineering optimizes compute density and layout to maximize token generation within power constraints. They aggregate SimReady OpenUSD assets from Siemens, Schneider Electric, Trane, and Vertiv into PTC’s product lifecycle management, then simulate thermals and electricals with CUDA-accelerated tools from ETAP and Cadence.
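
As a hedged sketch of what that OpenUSD aggregation step can look like in code (the pxr module is the open OpenUSD library; the vendor file names and prim paths here are invented for illustration, not the DSX blueprint’s actual asset structure):

```python
# Hypothetical sketch: aggregating SimReady vendor assets into one
# OpenUSD stage. The pxr calls are real OpenUSD; paths are invented.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("ai_factory.usda")
UsdGeom.Xform.Define(stage, "/Factory")

# Reference vendor-supplied assets into the factory scene.
for name, asset in [
    ("Cooling", "vendor/trane_chiller.usd"),       # hypothetical file
    ("Power", "vendor/schneider_switchgear.usd"),  # hypothetical file
    ("Racks", "vendor/vertiv_rack.usd"),           # hypothetical file
]:
    prim = stage.DefinePrim(f"/Factory/{name}")
    prim.GetReferences().AddReference(asset)

stage.GetRootLayer().Save()  # the composed twin, ready for simulation
```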

Once designed, NVIDIA partners like Bechtel and Vertiv deliver prefabricated modules, factory-built, tested, and ready to plug in. This shrinks build time significantly, achieving faster time to revenue. When the physical AI factory comes online, the digital twin acts as an operating system. Engineers prompt AI agents from Phaidra and Emerald AI, trained in the twin, to optimize power consumption and reduce strain on both the AI factory and the grid. In total, for a one-gigawatt AI factory, DSX optimizations can deliver billions of dollars in additional revenue per year.

Across Texas, Georgia, and Nevada, NVIDIA’s partners are bringing DSX to life. In Virginia, NVIDIA is building an AI factory research center using DSX to test and productize Vera Rubin from infrastructure to software. With DSX, NVIDIA partners around the world can build and bring up AI infrastructure faster than ever. Completely in digital. Long before Vera Rubin exists as a real computer, we’ve been using it as a digital twin.

Now, long before these AI factories exist, we will use them, design them, plan them, optimize them, and operate them as digital twins. And so to all of our partners working with us, I’m incredibly happy for all of you supporting us. GE Vernova is here. Schneider Electric, I think Olivier Blum is here. Siemens, incredible partners.

Roland Busch, I think he’s watching. Hi, Roland. Anyway, really, really great partners working with us. In the beginning, we had CUDA and all these different ecosystems of software partners. Now we have Omniverse DSX, and we are building AI factories, and again we have this incredible ecosystem of partners working with us.

Let’s talk about models, open-source models in particular. In the last couple of years, several things have happened. One, open-source models have become quite capable because of reasoning capabilities. They have become quite capable because of multimodality, and they’re incredibly efficient because of distillation. All of these capabilities have made open-source models, for the very first time, incredibly useful for developers.
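
As a rough sketch of the distillation idea Huang credits for that efficiency (a generic teacher/student setup in PyTorch; the linear "models" are stand-ins, not any actual NVIDIA or open-source release): the student is trained to match the teacher’s softened output distribution, which is how large models get compressed into efficient ones.

```python
# Toy knowledge-distillation step: a small "student" learns to match
# the temperature-softened logits of a large "teacher". Illustrative
# only; both models here are trivial stand-ins.
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(128, 1000)   # stand-in for a large model
student = torch.nn.Linear(128, 1000)   # stand-in for a distilled model
opt = torch.optim.SGD(student.parameters(), lr=1e-2)
T = 2.0                                # temperature softens the targets

x = torch.randn(32, 128)               # a batch of inputs
with torch.no_grad():
    soft_targets = F.softmax(teacher(x) / T, dim=-1)

log_probs = F.log_softmax(student(x) / T, dim=-1)
loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
opt.zero_grad()
loss.backward()
opt.step()
print(f"distillation loss: {loss.item():.4f}")
```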

They are now the lifeblood of startups, the lifeblood of startups across different industries, because, as I mentioned before, each industry has its own use cases, its own data, its own flywheels. All of that capability, that domain expertise, needs the ability to be embedded into a model. Open source makes that possible. Researchers need open source.

Developers need open source. Companies around the world need open source. Open-source models are really, really important. The United States has to lead in open source as well.

We have amazing proprietary models. We also need amazing open-source models. Our country depends on it, our startups depend on it, and so NVIDIA is dedicating itself to going and doing that. We are now the largest; we lead in open-source contribution.

We have 23 models on leaderboards, across all these different domains: language models, the physical AI models I’m going to talk about, robotics models, biology models. Each one of these models has an enormous team behind it, and that’s one of the reasons we build supercomputers for ourselves, to enable all these models to be created. We have the number one speech model, the number one reasoning model, the number one physical AI model. The number of downloads is really, really terrific.

We are dedicated to this, and the reason for that is because science needs it, researchers need it, startups need it, and companies need it. I’m delighted that AI startups build on NVIDIA. They do so for several reasons. One, of course, our ecosystem is rich. Our tools work great.

All of our tools work on all of our GPUs. Our GPUs are everywhere, literally in every single cloud, and available on-prem. You could build it yourself.

You could build an enthusiast gaming PC with multiple GPUs in it, download our software stack, and it just works. We have the benefit of rich developer communities who are making this ecosystem richer and richer. So I’m really pleased with all of the startups that we’re working with. I’m thankful for that. It is also the case that many of these startups are now starting to create even more ways to enjoy our GPUs.
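
To make the "download the stack and it just works" point concrete, here is a minimal portability check, assuming only that NVIDIA drivers and a CUDA build of PyTorch are installed:

```python
# Minimal portability check (illustrative): the same few lines run
# unchanged on a gaming PC, an on-prem server, or any cloud GPU
# instance, assuming NVIDIA drivers and CUDA-enabled PyTorch.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    x = torch.randn(1024, 1024, device="cuda")
    print("matmul ok:", (x @ x).shape)
else:
    print("No CUDA device found; falling back to CPU.")
```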

The CoreWeaves, Nscale, Nebius, Lambda, Crusoe, all of these companies are building new GPU clouds to serve the startups, and I really appreciate that. This is all possible because NVIDIA is everywhere. We integrate our libraries, all of the CUDA-X libraries I talked to you about, and all of the open-source AI models I talked about. We integrated into AWS, for example; really love working with Matt. We integrated into Google Cloud; really love working with Thomas.

Each one of these clouds integrates NVIDIA GPUs, our computing, our libraries, and our models. Love working with Satya over at Microsoft Azure. Love working with Clay at Oracle. Each one of these clouds integrates the NVIDIA stack. As a result, wherever you go, whichever cloud you use, it works incredibly well.

We also integrate NVIDIA libraries into the world’s SaaS platforms, so that each one of these SaaS platforms will eventually become agentic SaaS. I love Bill McDermott’s vision for ServiceNow. There, yeah, there you go. I think that might have been Bill.

Hi, Bill. And so ServiceNow, what is it, 85% of the world’s enterprise workflows. SAP, 80% of the world’s commerce. Christian Klein and I are working together to integrate NVIDIA libraries, CUDA-X and NeMo and Nemotron, all of our AI systems, into SAP. We’re working with Sassine over at Synopsys, accelerating the world’s CAE, CAD, and EDA tools so that they can be faster and can scale, and helping them create AI agents.

One of these days, I would love to hire AI ASIC designers to work alongside our ASIC designers, essentially the Cursor of Synopsys, if you will. We’re working with Anirudh; I saw him earlier today, he was part of the pregame show. Cadence is doing incredible work, accelerating their stack and creating AI agents, so that we can have Cadence AI ASIC designers and system designers working with us.

Today, we’re announcing a new one. AI will supercharge productivity. AI will transform just about every industry. But AI will also supercharge cybersecurity challenges, the bad AIs. And so we need an incredible defender.

I can’t imagine a better defender than CrowdStrike. George is here; I saw him earlier. We are partnering with CrowdStrike to make cybersecurity move at the speed of light, to create a system that has cybersecurity AI agents in the cloud, but also incredibly good AI agents on-prem or at the edge.

This way, whenever there’s a threat, you are moments away from detecting it. We need speed, and we need fast, super-smart agentic AIs. I have a second announcement. This is the single fastest-growing enterprise company in the world, and probably the single most important enterprise stack in the world today: Palantir Ontology. Anybody from Palantir here?

I was just talking to Alex earlier. This is Palantir Ontology. They take information, data, and human judgment, and they turn it into business insight. We’re working with Palantir to accelerate everything Palantir does, so that we can do data processing at much larger scale and much greater speed, whether it’s structured data, human-recorded data, or unstructured data, and process that data for our government, for national security, and for enterprises around the world, at the speed of light, and find insight in it. This is what it’s going to look like in the future.
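
As one hedged illustration of what GPU-accelerated data processing of this kind can look like (using the open RAPIDS cuDF library, one of the CUDA-X components; the file and column names are hypothetical, not Palantir’s pipeline):

```python
# Illustrative only: GPU dataframes via RAPIDS cuDF. The file name and
# columns are invented; the point is that filter/groupby/aggregate all
# execute on the GPU rather than the CPU.
import cudf

df = cudf.read_parquet("events.parquet")   # loads straight into GPU memory
summary = (
    df[df["severity"] > 3]                 # filter on the GPU
      .groupby("source")["latency_ms"]
      .mean()                              # aggregate on the GPU
      .sort_values(ascending=False)
)
print(summary.head())
```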

Palantir is going to integrate NVIDIA so that we can process data at the speed of light and at extraordinary scale. Okay? NVIDIA and Palantir. Let’s talk about physical AI. Physical AI requires three computers.

Just as it takes two computers for a language model, one to train it and evaluate it, and one to run inference on it, okay? That’s the large GB200 that you see. To do it for physical AI, you need three computers. You need the computer to train it.

This is the Grace Blackwell NVLink 72. We need a computer that does all of the simulations I showed you earlier with Omniverse DSX. It is basically a digital twin, for the robot to learn how to be a good robot, and for the factory to essentially have a digital twin. That computer is the second computer, the Omniverse computer. It has to be incredibly good at generative AI, and it has to be good at computer graphics, sensor simulation, ray tracing, and signal processing.

This computer is called the Omniverse computer. And once we’ve trained the model and simulated that AI inside a digital twin, and that digital twin could be of a factory as well as a whole bunch of robots, then you need to operate the robot, and that is the third, robotic computer. This one goes into a self-driving car; half of it could go into a robot. Okay? Or, for robots that are quite agile and quite fast in their operations, it might take two of these computers.

And so this is the Jetson Thor robotics computer. These three computers all run CUDA, and that makes it possible for us to advance physical AI: AI that understands the physical world, the laws of physics, causality, object permanence, you know, physical AI. We have incredible partners working with us to create the physical AI of factories. We’re using it ourselves to create our factory in Texas.
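
Schematically, the three-computer pattern Huang describes reduces to the following (a trivial illustration; the platform names come from the keynote, the structure is the editor’s):

```python
# A schematic of the three-computer pattern for physical AI.
# Labels paraphrase the keynote; this is illustration, not an API.
PHYSICAL_AI_STACK = {
    "train":    "Grace Blackwell NVL72 (GB200)",  # train and evaluate the model
    "simulate": "Omniverse computer",             # digital-twin simulation
    "operate":  "Jetson Thor",                    # runs on the robot itself
}

for stage, platform in PHYSICAL_AI_STACK.items():
    print(f"{stage:>8}: {platform}")
```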

Now, once we create the robotic factory, we have a bunch of robots inside it, and these robots also need physical AI, apply physical AI, and work inside the digital twin. Let’s take a look at it. America is reindustrializing, reshoring manufacturing across every industry. In Houston, Texas, Foxconn is building a state-of-the-art robotic facility for manufacturing NVIDIA AI infrastructure systems. With labor shortages and skills gaps, digitalization, robotics, and physical AI are more important than ever.

The factory is born digital in Omniverse. Foxconn engineers assemble their virtual factory in a Siemens digital twin solution developed on Omniverse technologies. Every system, mechanical, electrical, and plumbing, is validated before construction. Siemens Plant Simulation runs design-space-exploration optimizations to identify the ideal layout. When a bottleneck appears, engineers update the layout, with changes managed in Siemens Teamcenter.

In Isaac Sim, the same digital twin is used to train and simulate robot AIs. In the assembly area, FANUC manipulators build GB300 tray modules. Bimanual manipulators from FII and Skild AI install busbars into the trays, and AMRs shuttle the trays to the test pods. Then Foxconn uses Omniverse for large-scale sensor simulation, where robot AIs learn to work as a fleet. In Omniverse, vision AI agents built on NVIDIA Metropolis and Cosmos watch the fleets of robots and workers from above to monitor operations and alert Foxconn engineers to anomalies, safety violations, or even quality issues.

And to train new employees, agents power interactive AI coaches for easy worker onboarding. The age of US reindustrialization is here, with people and robots working together. That’s the future of manufacturing, the future of factories. I want to thank our partner Foxconn. Young Liu, the CEO, is here. All of these ecosystem partners make it possible for us to create the future of robotic factories.

The factory is essentially a robot orchestrating robots to build things that are robotic. The amount of software necessary to do this is so intense that unless you can plan it, design it, and operate it inside a digital twin, the hope of getting it to work is nearly nil. I’m so happy to see that Caterpillar, my friend Joe Creed, and his 100-year-old company are also incorporating digital twins into the way they manufacture. These factories will host future robotic systems, and one of the most advanced is Figure. Brett Adcock is here today.

He founded the company three and a half years ago; it’s worth almost $40,000,000,000 today. We’re working together on training the robot, simulating the robot, and of course the robotic computer that goes into Figure. Really amazing. I had the benefit of seeing it.

It’s really quite extraordinary. It is very likely that humanoid robots, and my friend Elon is also working on this, will be one of the largest new consumer electronics markets, and surely one of the largest industrial equipment markets. Peggy Johnson and the folks at Agility are working with us on robots for warehouse automation. The folks at Johnson & Johnson are working with us, again training the robot, simulating it in digital twins, and operating the robot. These Johnson & Johnson surgical robots are even going to perform completely noninvasive surgery at a precision the world has never seen before.

And of course, the cutest robot ever, the cutest robot ever: the Disney robot. This is something really close to our hearts. We’re working with Disney Research on an entirely new framework and simulation platform based on a revolutionary technology called Newton. The Newton simulator makes it possible for the robot to learn how to be a good robot inside a physically aware, physically based environment. Let’s take a look at it.

Blue, ladies and gentlemen. Disney’s Blue. Tell me that’s not adorable. Is he not adorable? We all want one.

We all want one. Now, remember, everything you were just seeing is not animation. It’s not a movie. It’s a simulation, and that simulation is in Omniverse.

Omniverse, the digital twin. So these digital twins of factories, digital twins of warehouses, digital twins of surgical rooms, digital twins where Blue can learn how to manipulate, navigate, and, you know, interact with the world, are all completely done in real time. This is going to be one of the largest consumer electronics product lines in the world. Some of them are already working incredibly well. This is the future of humanoid robotics and, of course, Blue.
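
As a toy illustration of the learn-inside-a-simulation loop described above (hypothetical NumPy pseudo-physics and a crude random-search trainer; the real Newton and Isaac Sim APIs are different): the pattern is to step a physics engine, reward the behavior you want, keep the policy changes that help, and repeat.

```python
# Schematic of learning inside a simulator (illustrative pseudo-setup,
# not the Newton or Isaac APIs): step physics, score behavior, keep
# the policy perturbations that improve the return.
import numpy as np

rng = np.random.default_rng(0)

def simulate_step(state, action):
    # Stand-in for a physics engine step: noisy linear dynamics.
    next_state = state + 0.1 * action + 0.01 * rng.standard_normal(3)
    reward = -np.linalg.norm(next_state)   # reward staying near the origin
    return next_state, reward

policy = np.zeros((3, 3))                  # linear policy: state -> action
best = None
for episode in range(200):
    state, total = rng.standard_normal(3), 0.0
    noise = 0.1 * rng.standard_normal((3, 3))
    for _ in range(50):
        state, r = simulate_step(state, (policy + noise) @ state)
        total += r
    # Crude hill-climbing: keep perturbations that improved the return.
    if best is None or total > best:
        best, policy = total, policy + noise
print("best simulated return:", round(best, 2))
```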

Okay? Now, humanoid robots are still in development, but meanwhile there is one robot that is clearly at an inflection point, and it is basically here: the robot on wheels. This is a robotaxi. A robotaxi is essentially an AI chauffeur. Now, one of the things we’re doing today: we’re announcing NVIDIA DRIVE Hyperion.

This is a big deal. We created this architecture so that every car company in the world can create vehicles, commercial or passenger or dedicated robotaxis, that are robotaxi-ready. The sensor suite, with surround cameras, radars, and lidar, makes it possible to achieve the surround-cocoon sensor perception and redundancy necessary for the highest level of safety. DRIVE Hyperion is now designed into Lucid, Mercedes-Benz, my friend Ola Källenius, and the folks at Stellantis, and there are many other cars coming. And once you have a basic standard platform, developers of AV systems, and there are so many talented ones, Wayve, Waabi, Aurora, Momenta, Nuro, there are so many of them.

WeRide, there are so many of them, can then take their AV system and run it on the standard chassis. Basically, the standard chassis has now become a computing platform on wheels. And because it’s standard and the sensor suite is comprehensive, all of them can deploy their AI onto it. Let’s take a quick look. Okay.

That’s beautiful San Francisco. And as you can see, the robotaxi inflection point is about to arrive. In the future, with a trillion miles driven a year, 100,000,000 cars made each year, and some 50,000,000 taxis around the world, all of it is going to be augmented by a whole bunch of robotaxis. So it’s going to be a very large market. To connect and deploy it around the world, today we’re announcing a partnership with Uber.

Uber, Dara Khosrowshahi, we’re working together to connect these NVIDIA DRIVE Hyperion cars into a global network. In the future, you’ll be able to hail one of these cars, the ecosystem is going to be incredibly rich, and we’ll have Hyperion robotaxis all over the world. This is going to be a new computing platform for us, and I’m expecting it to be quite successful. Okay. So this is what we talked about today.

We talked about a large number of things. Remember, at the core of this are two platform transitions. The first is from general-purpose computing to accelerated computing. NVIDIA CUDA and the suite of libraries called CUDA-X have enabled us to address practically every industry, and we’re at the inflection point; it is now growing, as a virtuous cycle would suggest. The second inflection point is now upon us.

The second platform transition is AI: from classical handwritten software to artificial intelligence. Two platform transitions happening at the same time, which is why we’re seeing such incredible growth. Quantum computing, we spoke about. Open models, we spoke about. We spoke about enterprise, with CrowdStrike and Palantir accelerating their platforms.

We spoke about robotics, soon one of the largest consumer electronics and industrial manufacturing sectors. And of course, we spoke about 6G. NVIDIA has a new platform for 6G; we call it ARC. We have a new platform for robotic cars.

We call that Hyperion. We have new platforms even for factories, two types of factories. The AI factory, we call that DSX. And then factories with AI, we call that MEGA. And so now, we’re also manufacturing in America.

Ladies and gentlemen, thank you for joining us today, and thank you for allowing us to bring GTC to Washington, DC. We’re gonna do it, hopefully, every year. And thank you all for your service and for making America great again. Thank you.

This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.
