NVIDIA at GTC 2025: AI and Accelerated Computing Innovations

Published 18/03/2025, 21:14

On Tuesday, 18 March 2025, NVIDIA Corporation (NASDAQ: NVDA) unveiled its strategic vision during the NVIDIA GTC 2025 Keynote. CEO Jensen Huang highlighted the company’s advancements in artificial intelligence (AI) and accelerated computing, emphasizing both opportunities and challenges. While NVIDIA showcased significant progress in AI infrastructure and innovations, the company also addressed hurdles in scaling AI technology.

Key Takeaways

  • NVIDIA introduced the Blackwell architecture, offering improved performance and energy efficiency over the previous Hopper generation.
  • The company launched NVIDIA Dynamo, an open-source operating system for AI factories.
  • NVIDIA’s roadmap includes future GPU architectures, Vera Rubin and Feynman, with a focus on continuous innovation.
  • Partnerships with major cloud service providers and enterprises aim to expand AI into edge computing, autonomous vehicles, and robotics.
  • The conference emphasized AI factories and the necessary infrastructure to support them.

AI Infrastructure and Scaling

  • AI Inflection Point: Hopper shipments from the top 4 cloud service providers indicate significant growth in AI infrastructure demand.
  • Data Center Growth: The data center build-out is expected to reach $1 trillion, driven by a shift from general-purpose to accelerated computing and GPUs.
  • Library Acceleration: CUDA-X libraries are accelerating various fields, including computational lithography (cuLitho), 5G radio (Aerial), and gene sequencing (Parabricks).
  • Blackwell Production: Blackwell is in full production, with Blackwell Ultra set for release in the second half of the year.
  • Silicon Photonics: NVIDIA announced its first co-packaged silicon photonic system at 1.6 terabits per second, enhancing density and power efficiency.

AI Advancements and Applications

  • Generative AI: This technology has shifted computing from retrieval to generative models, fundamentally changing the landscape.
  • Agentic AI: Enables machines to perceive, reason, plan, and take action, enhancing problem-solving capabilities.
  • Autonomous Vehicles: General Motors selected NVIDIA to partner on building their future self-driving car fleet.
  • Robotics: NVIDIA is building a foundation for robots with Isaac GR00T N1, a generalist foundation model for humanoid robots.

Performance Metrics and Comparisons

  • Token Generation: Reasoning AI generates a hundred times more tokens by breaking problems down step by step.
  • Hopper vs. Blackwell: Blackwell offers up to 25 times the performance of Hopper at the same (ISO) power.
  • Throughput Improvement: Blackwell NVLink 72 with Dynamo delivers 40 times the performance of Hopper in AI factory settings.
  • Cost Reduction: The Rubin architecture will significantly reduce computation costs due to improved energy efficiency.

Conclusion

For a detailed understanding of NVIDIA’s strategic direction and innovations, readers are encouraged to refer to the full transcript below.

Full transcript - NVIDIA GTC 2025 Keynote:

Jensen, CEO, NVIDIA: What an amazing year. We wanted to do this at NVIDIA. So through the magic of artificial intelligence, we’re gonna bring you to NVIDIA’s headquarters.

I think I’m bringing you to NVIDIA’s headquarters. What do you think? This is where we work. What an amazing year it was, and we have a lot of incredible things to talk about.

And I just want you to know that I’m up here without a net. There are no scripts. There’s no teleprompter, and I’ve got a lot of things to cover. So let’s get started. First of all, I wanna thank all of the sponsors, all the amazing people who are part of this conference.

Just about every single industry is represented. Health care is here, transportation, retail. Gosh, the computer industry. Everybody in the computer industry is here. And so it’s really, really terrific to see all of you, and thank you for sponsoring it.

GTC started with GeForce. It all started with GeForce. And today, I have here a GeForce 5090. Unbelievably, twenty-five years after we started working on GeForce, GeForce is sold out all over the world. This is the 5090, the Blackwell generation.

Comparing it to the 4090, it’s 30% smaller in volume, 30% better at dissipating energy, and the performance is incredible, hard to even compare. And the reason for that is artificial intelligence. GeForce brought CUDA to the world.

CUDA enabled AI, and AI has now come back to revolutionize computer graphics. What you’re looking at is real-time computer graphics, 100% path traced. For every pixel that’s rendered, artificial intelligence predicts the other 15. Think about this for a second. For every pixel that we mathematically rendered, artificial intelligence inferred the other 15.

And it has to do so with so much precision that the image looks right and it’s temporally accurate, meaning that from frame to frame, going forward or backward, because it’s computer graphics, it has to stay temporally stable. Incredible. Artificial intelligence has made extraordinary progress. It has only been ten years.

Now, we’ve been talking about AI for a little longer than that, but AI really came into the world’s consciousness about a decade ago. It started with perception AI, computer vision, speech recognition, then generative AI. The last five years, we’ve largely focused on generative AI, teaching an AI how to translate from one modality to another: text to image, image to text, text to video, amino acids to proteins, properties to chemicals, all kinds of different ways that we can use AI to generate content. Generative AI fundamentally changed how computing is done. From a retrieval computing model, we now have a generative computing model.

Whereas almost everything that we did in the past was about creating content in advance, storing multiple versions of it, and fetching whatever version we think is appropriate at the moment of use, now AI understands the context, understands what we’re asking, understands the meaning of our request, and generates what it knows. If it needs to, it’ll retrieve information, augment its understanding, and generate an answer for us. Rather than retrieving data, it now generates answers. This fundamentally changed how computing is done.

Every single layer of computing has been transformed. In the last couple of years, a major breakthrough happened, a fundamental advance in artificial intelligence. We call it agentic AI. Agentic AI basically means that you have an AI that has agency.

It can perceive and understand the context of the circumstance. Very importantly, it can reason about how to answer or how to solve a problem, and it can plan and take action. It can use tools because it now understands multimodal information. It can go to a website, look at the format of the website, words and videos, maybe even play a video, learn from that website, understand it, and come back and use that newfound knowledge to do its job.

Agentic AI. At the foundation of Agentic AI, of course, something that’s very new, reasoning. And then, of course, the next wave is already happening. We’re gonna talk a lot about that today. Robotics, which has been enabled by physical AI, AI that understands the physical world.

It understands things like friction and inertia, cause and effect, object permanence: when something goes out of sight, it doesn’t mean it has disappeared from this universe; it’s still there, just not visible. That ability to understand the physical world, the three-dimensional world, is what’s gonna enable a new era of AI we call physical AI, and it’s gonna enable robotics. Each one of these phases, each one of these waves, opens up new market opportunities for all of us. It brings more and new partners to GTC.

As a result, GTC is now jam packed. The only way to hold more people at GTC is we’re gonna have to grow San Jose, and we’re working on it. We’ve got a lot of land to work with. We’ve gotta grow San Jose so that we can make GTC bigger. As I’m standing here, I wish all of you could see what I see. We’re in the middle of a stadium.

Last year was the first year back that we did this live, and it was like a rock concert. GTC was described as the Woodstock of AI. And this year, it’s described as the Super Bowl of AI. The only difference is everybody wins at this Super Bowl. Everybody’s a winner.

And so every single year, more people come because AI is able to solve more interesting problems for more industries and more companies. And this year, we’re gonna talk a lot about agentic AI and physical AI. At its core, what enables each wave and each phase of AI comes down to three fundamental questions. The first is how do you solve the data problem? And the reason why that’s important is because AI is a data-driven computer science approach.

It needs data to learn from. It needs digital experience to learn from, to gain knowledge and digital experience. How do you solve the data problem? The second is how do you solve the training problem without a human in the loop? The reason why human in the loop is fundamentally challenging is because we only have so much time, and we would like an AI to be able to learn at superhuman rates, at super-real-time rates, and to be able to learn at a scale that no humans can keep up with.

And so the second question is, how do you train the model? And the third is, how do you scale? How do you find an algorithm whereby the more resource you provide, whatever the resource is, the smarter the AI becomes? The scaling law. Well, this last year, this is where almost the entire world got it wrong.

The computation requirement, the scaling law of AI, is more resilient and, in fact, hyper-accelerated. The amount of computation we need at this point, as a result of agentic AI, as a result of reasoning, is easily a hundred times more than we thought we needed this time last year. And let’s reason about why that’s true. Let me work backwards from what the AI can do.

Agentic AI, as I mentioned, has reasoning at its foundation. We now have AIs that can reason, which is fundamentally about breaking a problem down step by step. Maybe it approaches a problem in a few different ways and selects the best answer. Maybe it solves the same problem in a variety of ways and ensures it gets the same answer, consistency checking. Or maybe, after it’s done deriving the answer, it plugs it back into the equation, maybe a quadratic equation, to confirm that that’s in fact the right answer, instead of just blurting it out in one shot.

Remember two years ago, when we started working with ChatGPT. A miracle as it was, many complicated questions and many simple questions it simply couldn’t get right, and understandably so. It took a one shot: whatever it learned by studying pretrained data, whatever it saw from other experiences, it blurted out in one shot. Now we have AIs that can reason step by step by step, using a technology called chain of thought, best-of-N, consistency checking, a variety of different path planning, a variety of different techniques. We now have AIs that can reason: break a problem down and reason step by step by step. Well, you could imagine, as a result, the number of tokens we generate, while the fundamental technology of AI is still the same.

Generate the next token. Predict the next token. It’s just that the next token now makes up step one. Then, after it generates step one, that step one goes back into the input of the AI as it generates step two and step three and step four. So instead of just generating one token or one word after the next, it generates a sequence of words that represents a step of reasoning.
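To make that loop concrete, here is a minimal sketch of the autoregressive pattern being described. The toy_next_token function is a hypothetical stand-in for a real model’s forward pass, and the token counts echo the wedding-table demo later in the keynote:

```python
# Each emitted token is appended to the model's own input before the next
# prediction, so a chain-of-thought answer multiplies tokens and compute.

def toy_next_token(context: list[str]) -> str:
    # Stand-in: a real LLM would run a full forward pass over `context`
    # (reading essentially all of its weights) to predict one token.
    return f"step-{len(context)}"

def generate(prompt: list[str], n_tokens: int) -> list[str]:
    context = list(prompt)
    for _ in range(n_tokens):
        token = toy_next_token(context)  # one full model pass per token
        context.append(token)            # the new token becomes input again
    return context[len(prompt):]

one_shot  = generate(["seat", "the", "guests"], 439)   # direct answer
reasoning = generate(["seat", "the", "guests"], 8600)  # step-by-step answer
print(len(reasoning) / len(one_shot))                  # ~20x the tokens, ~20x the passes
```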

The amount of tokens that’s generated as a result is substantially higher, and I’ll show you in a second. Easily a hundred times more. Now a hundred times more, what does that mean? Well, it could generate a hundred times more tokens, and you can see that happening as I explained previously. Or the model is more complex.

It generates 10 times more tokens. And in order for us to keep the model responsive and interactive, so that we don’t lose our patience waiting for it to think, we now have to compute 10 times faster. And so 10 times the tokens, 10 times faster: the amount of computation we have to do is easily a hundred times more. And so you’re gonna see this in the rest of the presentation. The amount of computation we have to do for inference is dramatically higher than it used to be.

Well, the question then becomes, how do we teach an AI how to do what I just described? How to execute this chain of thought? Well, one method is you have to teach the AI how to reason. And as I mentioned earlier, in training, there are two fundamental problems we have to solve. Where does the data come from?

Where does the data come from? And how do we not have it be limited by the human in the loop? There’s only so much data and so much human demonstration we can perform. And so this is the big breakthrough in the last couple of years: reinforcement learning with verifiable results.

Basically, reinforcement learning of an AI as it tries to engage with solving a problem step by step. Well, we have many problems that have been solved in the history of humanity where we know the answer. We know how to solve a quadratic equation. We know the Pythagorean theorem, the rules of a right triangle.

We know many, many rules of math and geometry and logic and science. We have puzzle games that we could give it, constraint-type problems like Sudoku. Those kinds of problems, on and on and on: we have hundreds of these problem spaces where we can generate millions of different examples and give the AI hundreds of chances to solve them step by step, as we use reinforcement learning to reward it as it does a better and better job. So as a result, you take hundreds of different topics, millions of different examples, hundreds of different tries, each one of the tries generating tens of thousands of tokens.

You put that all together, and we’re talking about trillions and trillions of tokens in order to train that model. And now, with reinforcement learning, we have the ability to generate an enormous amount of tokens: synthetic data generation, basically using a robotic approach to teach an AI. The combination of these two things has put an enormous challenge of computing in front of the industry. And you can see that the industry is responding.
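As a rough, illustrative version of that arithmetic, with every factor an assumption rather than a disclosed figure, plus a toy example of the kind of verifiable reward being described:

```python
import math

# A verifiable reward: for problems with known answers (here, roots of a
# quadratic), a checker can score the AI's attempt with no human in the loop.
def quadratic_reward(a: float, b: float, c: float, roots: list[float]) -> float:
    ok = all(math.isclose(a*r*r + b*r + c, 0.0, abs_tol=1e-9) for r in roots)
    return 1.0 if ok else 0.0

print(quadratic_reward(1, -3, 2, [1.0, 2.0]))  # 1.0: both roots check out

# Back-of-envelope token volume (all factors illustrative):
topics, examples, tries, tokens_per_try = 100, 1_000_000, 100, 10_000
print(f"{topics * examples * tries * tokens_per_try:.0e} tokens")  # 1e+14
```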

What I’m about to show you is Hopper shipments of the top four CSPs, the ones with the public clouds: Amazon, Azure, GCP, and OCI. Just the top four CSPs; the AI companies are not included.

All the startups, not included. Enterprise, not included. A whole bunch of things not included. Just those four, just to give you a sense of comparing the peak year of Hopper and the first year of Blackwell.

Okay? The peak year of Hopper and the first year of Blackwell. So you can kinda see that, in fact, AI is going through an inflection point. It has become more useful because it’s smarter. It can reason.

It is more used. You can tell it’s more used because whenever you go to ChatGPT these days, it seems like you have to wait longer and longer, which is a good thing. It says a lot of people are using it with great effect. And the amount of computation necessary to train those models and to inference those models has grown tremendously. So in just one year, and Blackwell has just started shipping, you can see the incredible growth in AI infrastructure.

Well, that’s being reflected in computing across the board. The purple is analysts’ forecast of the increase in capital expense of the world’s data centers, including CSPs and enterprise, through the end of the decade, so 2030. I’ve said before that I expect the data center build-out to reach a trillion dollars, and I am fairly certain we’re gonna reach that very soon. Two dynamics are happening at the same time.

The first dynamic is that the vast majority of that growth is likely to be accelerated. Meaning, we’ve known for some time that general-purpose computing has run its course and that we need a new computing approach. The world is going through a platform shift from hand-coded software running on general-purpose computers to machine learning software running on accelerators and GPUs. This way of doing computation is at this point past the tipping point, and we are now seeing the inflection happening in the world’s data center build-outs.

So the first thing is a transition in the way we do computing. Second is an increase in recognition that the future of software requires capital investment. Now this is a very big idea. Whereas in the past, we wrote the software and we ran it on computers. In the future, the computer’s gonna generate the tokens for the software.

And so the computer has become a generator of tokens, not a retriever of files: from retrieval-based computing to generative-based computing, from the old way of doing data centers to a new way of building this infrastructure, and I call them AI factories. They’re AI factories because each has one job and one job only: generating these incredible tokens that we then reconstitute into music, into words, into videos, into research, into chemicals or proteins. We reconstitute them into all kinds of information of different types. So the world is going through a transition in not just the amount of data centers that will be built, but also how they’re built. Well, everything in the data center will be accelerated, though not all of it is AI.

And I wanna say a few words about this. You know, this slide is genuinely my favorite. And the reason for that is because for all of you coming to GTC all of these years, you’ve been listening to me talk about these libraries this whole time. This is in fact what GTC is all about, this one slide. In fact, a long time ago, twenty years ago, this was the only slide we had.

One library after another library after another library. You can’t just accelerate software. Just as we needed an AI framework in order to create AIs, and we accelerate the AI frameworks, you need frameworks for physics and biology and multiphysics and all kinds of quantum physics. You need all kinds of libraries and frameworks. We call them CUDA-X libraries: acceleration frameworks for each one of these fields of science. And this first one is incredible.

This is cuPyNumeric. NumPy is the most downloaded and most used Python library in the world, downloaded 400 million times this last year. cuPyNumeric is a zero-change, drop-in acceleration for NumPy. So if any of you are using NumPy out there, give cuPyNumeric a try. You’re gonna love it.
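The drop-in pattern looks something like the sketch below. The module name follows NVIDIA’s cuPyNumeric documentation; treat it as an assumption if your version differs:

```python
# Keep the NumPy code, swap the import: cuPyNumeric mirrors the NumPy API
# and runs it on the GPU, so existing programs need zero changes.
try:
    import cupynumeric as np   # GPU-accelerated, NumPy-compatible
except ImportError:
    import numpy as np         # graceful fallback on machines without it

a = np.random.rand(2048, 2048)
b = np.random.rand(2048, 2048)
print(float((a @ b).sum()))    # same API either way; only the backend changes
```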

cuLitho is our computational lithography library. Over the course of four years, we’ve now taken the entire process of computational lithography, which is the second factory in a fab. There’s the factory that manufactures the wafers, and then there’s the factory that manufactures the information to manufacture the wafers. Every industry, every company that has factories will have two factories in the future.

The factory for what they build and the factory for the mathematics, the factory for the AI: a factory for cars and a factory for the AI for the cars, a factory for smart speakers and a factory for the AI for the smart speakers. And so cuLitho is our computational lithography. TSMC, Samsung, ASML, our partners Synopsys and Mentor, there’s incredible support all over. I think this is now at its tipping point. In another five years’ time, every mask, every single lithography will be processed on NVIDIA CUDA. Aerial is our library for 5G, turning a GPU into a 5G radio.

Why not? Signal processing is something we do incredibly well. Once we do that, we can layer AI for RAN on top of it, or what we call AI-RAN. The next generation of radio networks will have AI deeply inserted into it. Why is it that we’re limited by the limits of information theory?

Because there’s only so much spectrum we can get. Not if we add AI to it, though. cuOpt is numerical or mathematical optimization. Almost every single industry uses this: when you plan seats and flights, inventory and customers, workers and plants, drivers and riders, and so on, where you have multiple constraints and a whole bunch of variables, and you’re optimizing for time, profit, quality of service, usage of resource, whatever it happens to be. NVIDIA uses it for our supply chain management. cuOpt is an incredible library.

It takes what would take hours and hours and turns it into seconds. The reason why that’s a big deal is that we can now explore a much larger space. We announced that we are going to open source cuOpt. Almost everybody is using either Gurobi, IBM CPLEX, or FICO, and we’re working with all three of them.
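cuOpt’s own API isn’t shown in the keynote, so here is the class of problem it accelerates, posed with SciPy’s CPU solver as a stand-in: assigning production across two plants to meet demand at minimum cost.

```python
from scipy.optimize import linprog

cost = [4.0, 6.0]              # $ per unit from plant A and plant B
a_ub = [[-1.0, -1.0],          # -(xA + xB) <= -100  =>  meet demand of 100
        [ 1.0,  0.0],          # xA <= 70 (plant A capacity)
        [ 0.0,  1.0]]          # xB <= 60 (plant B capacity)
b_ub = [-100.0, 70.0, 60.0]

res = linprog(cost, A_ub=a_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)          # [70, 30] at $460: max out the cheaper plant
```

The point of GPU acceleration is not this tiny instance; it is that real schedules have millions of variables and constraints, which is where hours shrink to seconds.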

The industry is so excited. We’re about to accelerate the living daylights out of the industry. Parabricks is for gene sequencing and gene analysis. MONAI is the world’s leading medical imaging library. Earth-2 is multiphysics for predicting local weather in very high resolution. cuQuantum and CUDA-Q.

We’re gonna have our first quantum day here at GTC. We’re working with just about everybody in the ecosystem, either helping them research quantum architectures and quantum algorithms, or building a classical, accelerated, quantum heterogeneous architecture. So really exciting work there. cuEquivariance and cuTensor are for tensor contraction and quantum chemistry. Of course, this stack is world famous.

People think that there’s one piece of software called CUDA, but in fact, on top of CUDA is a whole bunch of libraries that are integrated into all different parts of the ecosystem and software and infrastructure in order to make AI possible. I’ve got a new one here to announce today. cuDSS, our sparse solver, is really important for CAE. This is one of the biggest things that has happened in the last year. Working with Cadence and Synopsys and Ansys and Dassault and all of the systems companies, we’ve now made it possible for just about every important EDA and CAE library to be accelerated.

What’s amazing is that until recently, NVIDIA had been using general-purpose computers, running software super slowly, to design accelerated computers for everybody else. And the reason for that is because we never had that software, that body of software, optimized for CUDA until recently. So now our entire industry is gonna get supercharged as we move to accelerated computing. cuDF is a data frame for structured data; we now have drop-in acceleration for Spark and drop-in acceleration for pandas. Incredible.
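The pandas path works like this, per the RAPIDS cudf.pandas documentation (treat the loader line as an assumption if your version differs):

```python
# Load the accelerator before importing pandas; unchanged pandas code then
# runs on the GPU where supported and falls back to CPU otherwise.
#
#   %load_ext cudf.pandas         (Jupyter)
#   python -m cudf.pandas app.py  (command line)
import pandas as pd

df = pd.DataFrame({"store": ["a", "a", "b"], "sales": [10, 20, 30]})
print(df.groupby("store")["sales"].sum())  # same pandas API, GPU-backed
```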

And then we have Warp, a Python library for physics for CUDA. We have a big announcement there that I will save for just a second. This is just a sampling of the libraries that make accelerated computing possible. It’s not just CUDA.
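To give a flavor of one of these libraries in practice, here is a minimal particle-update kernel in the style of Warp’s documented Python API; treat the exact calls as assumptions if your version differs:

```python
import warp as wp
import numpy as np

wp.init()

@wp.kernel
def integrate(x: wp.array(dtype=wp.float32),
              v: wp.array(dtype=wp.float32),
              dt: float):
    tid = wp.tid()                 # one CUDA thread per particle
    x[tid] = x[tid] + v[tid] * dt  # explicit Euler step

n = 1024
x = wp.array(np.zeros(n, dtype=np.float32), device="cuda")
v = wp.array(np.ones(n, dtype=np.float32), device="cuda")
wp.launch(integrate, dim=n, inputs=[x, v, 0.01])  # runs on the GPU
print(x.numpy()[:4])               # [0.01 0.01 0.01 0.01]
```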

We’re so proud of CUDA. But if not for CUDA, and the fact that we have such a large installed base, none of these libraries would be useful for the developers who use them. For all the developers that use them, you use them because, one, they’re going to give you incredible speedup and incredible scale-up, and two, because the installed base of CUDA is now everywhere.

It’s in every cloud. It’s in every data center. It’s available from every computer company in the world. It’s literally everywhere. And therefore, by using one of these libraries, your software, your amazing software, can reach everyone.

And so we’ve now reached the tipping point of accelerated computing. CUDA has made it possible. And all of you, this is what GTC is about, the ecosystem. All of you made this possible. And so we made a little short video for you.

Thank you. To the creators, the pioneers, the builders of the future, CUDA was made for you. Since 2006, 6 million developers in over 200 countries have used CUDA and transformed computing. With over 900 CUDA-X libraries and AI models, you’re accelerating science, reshaping industries, and giving machines the power to see, learn, and reason. Now NVIDIA Blackwell is 50,000 times faster than the first CUDA GPU.

These orders of magnitude gains in speed and scale are closing the gap between simulation and real time digital twins. And for you, this is still just the beginning. We can’t wait to see what you do next. I love what we do. I love even more what you do with it.

And one of the things that most touched me, in my thirty-three years doing this: one scientist said to me, Jensen, because of your work, I can do my life’s work in my lifetime. And, boy, if that doesn’t touch you, well, you gotta be a corpse. So this is all about you guys. Thank you. Alright.

So we’re gonna talk about AI. But, you know, AI started in the cloud. It started in the cloud for good reason because it turns out that AI needs infrastructure. It’s machine learning. If the science says machine learning, then you need a machine to do the science.

And so machine learning requires infrastructure, and the cloud data centers had infrastructure. They also have extraordinary computer science, extraordinary research, the perfect circumstance for AI to take off in the cloud and the CSPs. But that’s not where AI is limited to. AI will go everywhere. And we’re gonna talk about AI in a lot of different ways.

And the cloud service providers, of course, like our leading-edge technology. They like the fact that we have full stack, because accelerated computing, as I was explaining earlier, is not about the chip. It’s not even just the chip and the library. It’s the chip, the programming model, and a whole bunch of software that goes on top of it. That entire stack is incredibly complex.

Each one of those layers, each one of those libraries, is essentially like SQL. SQL, as you know, is in-storage computing. It was the big revolution of computation by IBM. SQL is one library; just imagine. I just showed you a whole bunch of them.

And in the case of AI, there’s a whole bunch more. So the stack is complicated. The CSPs also love that NVIDIA CUDA developers are CSP customers, because in the final analysis, they’re building infrastructure for the world to use. And so the rich developer ecosystem is really valued and deeply appreciated. Well, now that we’re gonna take AI out to the rest of the world, the rest of the world has different system configurations, operating environment differences, domain-specific library differences, and usage differences.

And so AI, as it translates to enterprise IT, as it translates to manufacturing, as it translates to robotics or self-driving cars, or even to companies that are starting GPU clouds. There’s a whole bunch of companies, maybe 20 of them, who started during this NVIDIA era, and what they do is just one thing: they host GPUs. They call themselves GPU clouds.

One of our great partners, CoreWeave, is in the process of going public, and we’re super proud of them. So GPU clouds have their own requirements. But one of the areas that I’m super excited about is edge. And we announced today that Cisco, NVIDIA, T-Mobile, the largest telecommunications company in the world, and Cerberus ODC are gonna build a full stack for radio networks here in the United States. The stack we’re announcing today will put AI into the edge.

Remember, a hundred billion dollars of the world’s capital investment each year goes into radio networks and all of the data centers provisioned for communications. In the future, there is no question in my mind that will be accelerated computing infused with AI. AI will do a far, far better job adapting the radio signals, the massive MIMO, to the changing environments and the traffic conditions. Of course it would. Of course we would use reinforcement learning to do that.

Of course, MIMO is essentially one giant radio robot. Of course it is. And so we will, of course, provide for those capabilities. Of course AI could revolutionize communications. You know, when I call home, I don’t have to say more than a few words, because my wife knows where I work and what the conditions are like.

The conversation carries on from yesterday. She kinda remembers what I like and don’t like. And oftentimes, with just a few words, you communicate a whole bunch. The reason for that is context and human priors, prior knowledge. Well, combining those capabilities could revolutionize communications.

Look what it’s doing for video processing. Look what I just described earlier in 3D graphics. And so, of course, we’re going to do the same for the edge. So I’m super excited about the announcement that we made today: T-Mobile, Cisco, NVIDIA, and Cerberus ODC are going to build a full stack.

Well, AI is gonna go into every industry. That’s just one. One of the earliest industries that AI went into was autonomous vehicles. We’d been working on computer vision for a long time, and the moment I saw AlexNet was such an inspiring moment, such an exciting moment, that it caused us to decide to go all in on building self-driving cars.

So we’ve been working on self-driving cars now for over a decade. We build technology that almost every single self-driving car company uses. It could be in the data center: for example, Tesla uses lots of NVIDIA GPUs in the data center. It could be in the data center or the car.

Waymo and Wayve use NVIDIA computers in data centers as well as the car. It could be just in the car; it’s very rare, but sometimes it’s just in the car. Or they use all of our software in addition. We work with the car industry however the car industry would like us to work with them. We build all three computers: the training computer, the simulation computer, and the robotics computer, the self-driving car computer, plus all the software stack that sits on top of it, models and algorithms, just as we do with all of the other industries that I’ve demonstrated.

And so today, I’m super excited to announce that GM has selected NVIDIA to partner with them to build their future self-driving car fleet. The time for autonomous vehicles has arrived, and we’re looking forward to building with GM AI in all three areas: AI for manufacturing, so they can revolutionize the way they manufacture; AI for enterprise, so they can revolutionize the way they work, design cars, and simulate cars; and then also AI for in the car. So AI infrastructure for GM, partnering with GM and building with GM their AI.

So I’m super excited about that. One of the areas that I’m deeply proud of, and it rarely gets any attention, is safety: automotive safety. It’s called Halos, our automotive safety system.

Safety requires technology from silicon to systems to system software, the algorithms, the methodologies, everything from ensuring diversity to monitoring, transparency, and explainability. All of these different philosophies have to be deeply ingrained into every single part of how you develop the system and the software. We’re the first company in the world, I believe, to have every line of code safety assessed: 7,000,000 lines of code safety assessed. Our chip, our system, our system software, and our algorithms are safety assessed by third parties that crawl through every line of code to ensure that it is designed to ensure diversity, transparency, and explainability. We have also filed over a thousand patents.

And during this GTC, I really encourage you to go spend time in the Halos workshop, so that you can see all of the different things that come together to ensure that the cars of the future are going to be safe as well as autonomous. And so this is something I’m very proud of. It rarely gets any attention, so I thought I would spend the extra time this time to talk about it. Okay.

NVIDIA Halos. All of you have seen cars drive by themselves. The Waymo robotaxis are incredible. But we made a video to share with you some of the technology we use to solve the problems of data and training and diversity, so that we could use the magic of AI to go create AI. Let’s take a look.

NVIDIA is accelerating AI development for AVs with Omniverse and Cosmos. Cosmos’ prediction and reasoning capabilities support AI-first AV systems that are end-to-end trainable with new methods of development: model distillation, closed-loop training, and synthetic data generation. First, model distillation. Adapted as a policy model, Cosmos’ driving knowledge transfers from a slower, intelligent teacher to a smaller, faster student inferenced in the car.

The teacher’s policy model demonstrates the optimal trajectory, followed by the student model learning through iterations until it performs at nearly the same level as the teacher. The distillation process bootstraps a policy model, but complex scenarios require further tuning. Closed-loop training enables fine-tuning of policy models. Log data is turned into 3D scenes for driving closed-loop in physics-based simulation using Omniverse Neural Reconstruction. Variations of these scenes are created to test the model’s trajectory generation capabilities.

Cosmos’ behavior evaluator can then score the generated driving behavior to measure model performance. Newly generated scenarios and their evaluations create a large dataset for closed-loop training, helping AVs navigate complex scenarios more robustly. Last, 3D synthetic data generation enhances AVs’ adaptability to diverse environments. From log data, Omniverse builds detailed 4D driving environments by fusing maps and images and generates a digital twin of the real world, including segmentation to guide Cosmos by classifying each pixel. Cosmos then scales the training data by generating accurate and diverse scenarios, closing the sim-to-real gap.

Omniverse and Cosmos enable AVs to learn, adapt, and drive intelligently, advancing safer mobility. NVIDIA is the perfect company to do that. Gosh. That’s our destiny. Use AI to recreate AI.

The technology that we showed you there is very similar to the technology that you’re enjoying right now to bring you to a digital twin we call NVIDIA. Alright. Let’s talk about data centers. That’s not bad, Gaussian splats, just in case. Gaussian splats.

Well, let’s talk about data centers. Blackwell is in full production, and this is what it looks like. It’s incredible. You know, for us, this is a sight of beauty. Would you agree? How is this not beautiful?

How is this not beautiful? Well, this is a big deal because we made a fundamental transition in computer architecture. I just want you to know that I’ve shown you a version of this about three years ago. It was called Grace Hopper, and the system was called Ranger. The Ranger system is maybe about half the width of the screen.

And it was the world’s first NVLink 32. Three years ago, we showed Ranger working, and it was way too large, but it was exactly the right idea. We were trying to solve scale up. Distributed computing is about using a whole lot of different computers working together to solve a very large problem. But there’s no replacement for scaling up before you scale out.

Both are important, but you wanna scale up first before you scale out. Well, scaling up is incredibly hard. There is no simple answer for it. You’re not gonna scale it up the way you scale out with Hadoop.

Take a whole bunch of commodity computers, hook them up into a large network, and do in-storage computing using Hadoop. Hadoop was a revolutionary idea, as we know. It enabled hyperscale data centers to solve problems of gigantic size, often using off-the-shelf computers. However, the problem we’re trying to solve is so complex that scaling in that way would have simply cost way too much power, way too much energy. Deep learning would have never happened.

And so the thing that we had to do was scale up first. Well, this is the way we scaled up. I’m not gonna lift this; it’s 70 pounds. This last-generation system architecture is called HGX.

This revolutionized computing as we know it. This revolutionized artificial intelligence. This is eight GPUs. Eight GPUs. Each one of them is kind of like this.

Okay. This is two GPUs, two Blackwell GPUs in one Blackwell package. And there are eight of these underneath this. Okay.

And this connects into what we call NVLink 8. This then connects to a CPU shelf like that, so there are dual CPUs that sit on top, and we connect it over PCI Express. And then many of these get connected with InfiniBand, which turns into what is an AI supercomputer. This is the way it was in the past.

This is how we started. Well, this is as far as we scaled up before we scaled out. But we wanted to scale up even further. And I told you that Ranger took this system and scaled it up by another factor of four. And so we had NVLink 32, but the system was way too large.

And so we had to do something quite remarkable: reengineer how NVLink worked and how scale-up worked. And so the first thing that we did was we said, listen, the NVLink switches are embedded on the motherboard in this system. We need to disaggregate the NVLink system and take it out. So this is the NVLink system. Okay.

This is an NVLink switch. This is the highest-performance switch the world’s ever made. And it makes it possible for every GPU to talk to every GPU at exactly the same time at full bandwidth. Okay. So this is the NVLink switch.

We disaggregated it. We took it out and we put it in the center of the chassis. So there are 18 of these switches in nine different switch trays, as we call them. And then the switches are disaggregated; the compute is now sitting in here.

This is equivalent to these two things in compute. What’s amazing is this is completely liquid cooled. And by liquid cooling it, we can compress all of these compute nodes into one rack. This is the big change of the entire industry. All of you in the audience, I know how many of you are here.

I wanna thank you for making this fundamental shift from integrated NVLink to disaggregated NVLink, from air cooled to liquid cooled, from 60,000 components per computer or so to 600,000 components per rack, 120 kilowatts, fully liquid cooled. And as a result, we have a one-exaFLOPS computer in one rack. Isn’t it incredible? So this is the compute node.

Okay. And that now fits in one of these. Now, this is 3,000 pounds, 5,000 cables, about two miles’ worth. Just incredible electronics. 600,000 parts.

I think it’s like 20 cars’ worth of parts, integrated into one supercomputer. Well, our goal is to do this. Our goal is scale-up. And this is what it now looks like.

We essentially wanted to build this chip. It’s just that no reticle limit allows this. No process technology can do this. It’s 30 trillion transistors, 20 trillion of which are used for computing. So it’s not like you can reasonably build this anytime soon.

And so the way to solve this problem is to disaggregate it, as I described, into the Grace Blackwell NVLink 72 rack. But as a result, we have done the ultimate scale-up. This is the most extreme scale-up the world has ever done. The amount of computation that’s possible here, the memory bandwidth: 570 terabytes per second. Everything in this machine is now in Ts.

Everything’s a trillion. And you have an exaFLOPS, which is a million trillion floating point operations per second. Well, the reason why we wanted to do this is to solve an extreme problem. And that extreme problem, a lot of people misunderstood to be easy. In fact, it is the ultimate extreme computing problem, and it’s called inference.

And the reason for that is very simple. Inference is token generation by a factory, and a factory is revenue and profit generating, or the lack of it. And so this factory has to be built with extreme efficiency, with extreme performance, because everything about this factory directly affects your quality of service, your revenues, and your profitability. Let me show you how to read this chart, because I’ll come back to it a few more times.

Basically, you have two axes. On the x-axis is tokens per second. Whenever you chat, when you put a prompt into ChatGPT, what comes out is tokens. Those tokens are reformulated into words. You know, it’s more than a token per word.

Okay? And they’ll tokenize things like T-H-E: it could be used for the word the, it could be used in them, in theory, in theatrics, all kinds of words. And so T-H-E is an example of a token. They reformulate these tokens to turn them into words. Well, we’ve already established that if you want your AI to be smarter, you wanna generate a whole bunch of tokens.
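Concretely, here is what one publicly available tokenizer does with those words; the exact splits depend entirely on the tokenizer, and cl100k_base is just an illustrative choice, not necessarily what any model in the keynote uses:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for word in ["the", "them", "theory", "theatrics"]:
    ids = enc.encode(word)
    print(word, ids, [enc.decode([i]) for i in ids])
# Common fragments are reused across many words, which is why token counts,
# not word counts, are what an AI factory actually produces and bills.
```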

Those tokens are reasoning tokens, consistency-checking tokens, coming-up-with-a-whole-bunch-of-ideas-so-it-can-select-the-best-of-them tokens. And so it might be second-guessing itself. It might be asking, is this the best work you could do? It talks to itself just like we talk to ourselves.

And so the more tokens you generate, the smarter your AI. But if you take too long to answer a question, the customer’s not gonna come back. This is no different than web search. There is a real limit to how long it can take before it comes back with a smart answer. And so you have these two dimensions that you’re fighting against.

You’re trying to generate a whole bunch of tokens, but you’re trying to do it as quickly as possible. Therefore, your token rate matters. So you want the tokens per second for that one user to be as fast as possible. However, in computer science, as in factories, there’s a fundamental tension between latency (response time) and throughput. And the reason is very simple.

If you’re in a large, high-volume business, you batch up, it’s called batching: you batch up a lot of customer demand and manufacture a certain version of it for everybody to consume later. However, from the moment that it’s batched up and manufactured to the time that you consume it could be a long time. It’s no different for computer science, no different for AI factories that are generating tokens. And so you have these two fundamental tensions. On the one hand, you would like the customer’s quality of service to be as good as possible.

Smart AIs that are super fast. On the other hand, you’re trying to get your data center to produce tokens for as many people as possible, so you can maximize your revenues. The perfect answer is to the upper right. Ideally, the shape of that curve is a square, where you could generate very fast tokens per person up until the limits of the factory, but no factory can do that. And so it’s probably some curve.

And your goal is to maximize the area under the curve, okay? The product of x and y. And the further you push the curve out, the better the factory you’re building. Well, it turns out that between tokens per second for the whole factory and tokens per second of response time, one of them requires an enormous amount of computation, flops.

And the other dimension requires an enormous amount of bandwidth and flops. And so this is a very difficult problem to solve. The good answer is that you should have lots of flops and lots of bandwidth and lots of memory and lots of everything. That’s the best answer to start, which is the reason why this is such a great computer. You start with the most flops you can, the most memory you can, the most bandwidth you can, and of course the best architecture you can, the most energy efficiency you can.

And you have to have a programming model that allows you to run software across all of this, which is insanely hard, so that you can do this. Now let’s just take a look at this one demo to give you a tactile feeling of what I’m talking about. Please play it.

Unidentified speaker: Traditional LLMs capture foundational knowledge, while reasoning models help solve complex problems with thinking tokens. Here, a prompt asks to seat people around a wedding table while adhering to constraints like traditions, photogenic angles, and feuding family members. The traditional LLM answers quickly with under 500 tokens but makes mistakes in seating the guests, while the reasoning model thinks with over 8,000 tokens to come up with the correct answer: it takes a pastor to keep the peace.

Jensen, CEO, NVIDIA: Okay. As all of you know, if you have a wedding party of 300 and you’re trying to find the perfect, well, the optimal seating for everyone, that’s a problem that only AI can solve, or a mother-in-law. And so that’s one of those problems that cuOpt cannot solve. Okay. So what you see here is that we gave it a problem that requires reasoning.

And you saw R1 go off and reason about it, try all these different scenarios, come back, and test its own answer. It asked itself whether it did it right. Meanwhile, the last-generation language model does a one-shot. So the one-shot is 439 tokens. It was fast.

It was effective, but it was wrong. So it was 439 wasted tokens. On the other hand, in order for you to reason about this problem, and that was actually a very simple problem, you just give it a few more difficult variables and it becomes very difficult to reason through, it took 8,000, almost 9,000 tokens.

And it took a lot more computation because the model is more complex. Okay. So that’s one dimension. Before I show you some results, let me explain something else. So the answer: if you look at Blackwell, the Blackwell system, it’s now the scaled-up NVLink 72.

The first thing that we have to do is take this model. And this model is not small. In the case of R1, people think R1 is small, but it’s 608 billion parameters. Next-generation models could be trillions of parameters. And the way that you solve that problem is you take these trillions and trillions of parameters, this model, and you distribute the workload across the whole system of GPUs.

You can use tensor parallelism: take one layer of the model and run it across multiple GPUs. You could take a slice of the pipeline and call that pipeline parallelism, and put that on multiple GPUs. You could take different experts and put them across different GPUs; we call it expert parallelism.
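Here is a toy of what those three splits mean, with small arrays standing in for GPU shards; the point is what gets divided, not the communication machinery:

```python
import numpy as np

d = 8
x = np.random.rand(d).astype(np.float32)
W = np.random.rand(d, d).astype(np.float32)

# Tensor parallel: one layer's weight matrix is split by columns across
# "GPUs"; each computes a slice, and the results are gathered back together.
shards = np.split(W, 2, axis=1)
y_tp = np.concatenate([x @ s for s in shards])
print(np.allclose(y_tp, x @ W))                  # True: same math, sharded

# Pipeline parallel: different layers live on different GPUs, and
# activations flow from one stage to the next.
stage1 = lambda a: a @ W                         # "GPU 0"
stage2 = lambda a: np.tanh(a)                    # "GPU 1"
y_pp = stage2(stage1(x))

# Expert parallel: different experts live on different GPUs, and a router
# sends each token to one of them.
experts = [np.random.rand(d, d).astype(np.float32) for _ in range(4)]
y_ep = x @ experts[hash("token") % 4]            # toy router
```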

The combination of pipeline parallelism and tensor parallelism and expert parallelism, the number of combinations, is insane. And depending on the model, depending on the workload, depending on the circumstance, how you configure that computer has to change so that you can get the maximum throughput out of it. You also sometimes optimize for very low latency; sometimes you optimize for throughput. And so you have to do some in-flight batching.

There are a lot of different techniques for batching and aggregating work. And so the software, the operating system for these AI factories, is insanely complicated. Well, one of the observations, and this is a really terrific thing about having a homogeneous architecture like NVLink 72, is that every single GPU can do all the things that I just described. And we observe that these reasoning models are doing a couple of phases of computing. One of the phases of computing is thinking.

When you’re thinking, you’re not producing a lot of tokens. You’re producing tokens that you’re maybe consuming yourself. You’re thinking. Maybe you’re reading. You’re digesting information.

That information could be a PDF. That information could be a website. You could literally be watching a video, ingesting all of that at super linear rates. And you take all of that information and you then formulate the answer, formulate a planned answer. And so that digestion of information, context processing is very flops intensive.

On the other hand, the next phase is called decode. The first part we call prefill. The decode phase requires floating point operations, but it requires an enormous amount of bandwidth. And it’s fairly easy to calculate: if you have a model with a few trillion parameters, it takes a few terabytes per second.

Notice I was mentioning 576 terabytes per second. It takes terabytes per second just to pull the model in from HBM memory and to generate literally one token. And the reason it generates one token is because, remember, these large language models are predicting the next token. That’s why we say the next token. It’s not predicting every single token.

It’s predicting the next token. Now we have all kinds of new techniques, speculative decoding, and all kinds of new techniques for doing that faster. But in the final analysis, you’re predicting the next token. Okay? And so you ingest, pull in the entire model and the context.

We call that the KV cache, and then we produce one token. And then we take that one token, put it back into our brain, and produce the next token. Every single time we do that, we take trillions of parameters in and produce one token. Trillions of parameters in, produce another token. Trillions of parameters in, produce another token.
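A back-of-envelope sketch of why that makes decode bandwidth-bound, with every number illustrative:

```python
params      = 1e12        # a ~1-trillion-parameter model
bytes_per_p = 1           # FP8: roughly one byte per parameter
tok_per_s   = 20          # an interactive per-user target

bytes_per_token = params * bytes_per_p                    # ~1 TB read per token
print(f"{bytes_per_token * tok_per_s / 1e12:.0f} TB/s")   # 20 TB/s for one user
# Batching amortizes the weight reads across users, but the weights still
# stream from HBM constantly, which is what hundreds of TB/s are feeding.
```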

And notice in that demo, we produced 8,600 tokens. So trillions of bytes of information have been taken into our GPUs to produce one token at a time, which is fundamentally the reason why you want NVLink. NVLink gives us the ability to take all of those GPUs and turn them into one massive GPU: the ultimate scale-up. And the second thing is that now that everything is on NVLink, I can disaggregate the prefill from the decode, and I can decide I want to use more GPUs for prefill, fewer for decode.

Because I’m thinking a lot. I’m doing agentic work. I’m reading a lot of information. I’m doing deep research. Notice, during deep research, earlier I was listening to Michael, and Michael was talking about doing research, and I do the same thing.

We go off and we write these really long research projects for our AI. And I love doing that because, you know, I already paid for it, and I just love making our GPUs work. Nothing gives me more joy. So I write it up, and then it goes off and does all this research. It went off to like 94 different websites and read all of it, and I’m reading all this information, and it formulates an answer and writes the report.

It’s incredible. Okay? During that entire time, prefill is super busy and it’s not really generating that many tokens. On the other hand, when you’re chatting with the chatbot and millions of us are doing the same thing, it is very token generation heavy. It’s very decode heavy.

Okay? And so, depending on the workload, we might decide to put more GPUs into decode, or, depending on the workload, more GPUs into prefill. Well, this dynamic operation is really complicated. So I’ve just now described pipeline parallelism, tensor parallelism, expert parallelism, prefill, in-flight batching, disaggregated inferencing, and workload management. And then I’ve got to take this thing called the KV cache.

I’ve got to route it to the right GPU. I’ve got to manage it through all the memory hierarchies. That piece of software is insanely complicated. And so today we’re announcing NVIDIA Dynamo. NVIDIA Dynamo does all that.
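A toy of the scheduling job just described, and emphatically not Dynamo’s actual API: keep separate prefill and decode pools, and move a request’s KV cache from one to the other.

```python
from collections import deque

class ToyFactoryScheduler:
    def __init__(self, prefill_gpus: int, decode_gpus: int):
        self.prefill = deque(range(prefill_gpus))
        self.decode = deque(range(prefill_gpus, prefill_gpus + decode_gpus))
        self.kv_home: dict[str, int] = {}    # request id -> GPU holding its KV

    def run_prefill(self, req: str) -> int:
        gpu = self.prefill[0]; self.prefill.rotate(-1)  # round-robin
        self.kv_home[req] = gpu              # KV cache is built here
        return gpu

    def run_decode(self, req: str) -> int:
        gpu = self.decode[0]; self.decode.rotate(-1)
        self.kv_home[req] = gpu              # route the KV cache to this GPU
        return gpu

sched = ToyFactoryScheduler(prefill_gpus=48, decode_gpus=24)  # research-heavy mix
sched.run_prefill("req-1")
print(sched.run_decode("req-1"))             # decode continues on another GPU
```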

Dynamo is essentially the operating system of an AI factory. Whereas in the past, in the way that we ran data centers, our operating system would be something like VMware, and we would orchestrate, and we still do, you know, we’re a big user, a whole bunch of different enterprise applications running on top of our enterprise IT. But in the future, the application is not enterprise IT; it’s agents.

And the operating system is not something like VMware, it’s something like Dynamo. And this operating system is running on top of not a data center, but on top of an AI factory. Now we call it Dynamo for a good reason. As you know, the Dynamo was the first instrument that started the last industrial revolution, the industrial revolution of energy. Water comes in, electricity comes out.

It’s pretty fantastic. You know, water comes in, you light it on fire, it turns to steam, and what comes out is this invisible thing that’s incredibly valuable. It took another eighty years to get to alternating current, but the dynamo is where it all started. Okay?

So we decided to call this operating system, this insanely complicated piece of software, NVIDIA Dynamo. It’s open source. And we’re so happy that so many of our partners are working with us on it. And one of my favorite partners, I just love them so much because of the revolutionary work they do, and also because Aravind is such a great guy.

Perplexity is a great partner of ours in working through this. Okay. So anyhow, really great. Now, we’re gonna have to wait until we scale up all this infrastructure.

But in the meantime, we’ve done a whole bunch of very in-depth simulation. We have supercomputers doing simulation of our supercomputers, which makes sense. And I’m now gonna show you the benefit of everything that I’ve just said. And remember the factory diagram: on the y-axis is tokens per second throughput of the factory, and on the x-axis, tokens per second of the user experience.

And you want super smart AIs, and you want to produce a whole bunch of them. This is Hopper. It can produce, for each user, about a hundred tokens per second. This is eight GPUs, connected with InfiniBand.

And I’m normalizing it to tokens per second per megawatt. So it’s a one-megawatt data center, which is not a very large AI factory, but anyhow, one megawatt. Okay? And so it can produce, for each user, a hundred tokens per second, and at that level, whatever it happens to be, a hundred thousand tokens per second for that one-megawatt data center. Or it can produce about two and a half million tokens per second.

Two and a half million tokens per second for that AI factory if it was super batched up and the customer is willing to wait a very long time. Okay? Does that make sense? Alright. So nod.

Alright. Because this is where, you know, every GTC, there’s the price for entry. You guys know? You get tortured with math. Okay?

Only at NVIDIA do you get tortured with math. Alright. So with Hopper, you get two and a half million. Now, what’s that two and a half million? How do you translate that?

Two and a half million. Remember, ChatGPT is like $10 per million tokens. Right? $10 per million tokens. I think the $10 per million tokens is probably down here.

Okay? I'll probably say it's down here. But let me pretend it's up there, because 2.5 million tokens per second at $10 per million tokens is $25 per second. Does that make sense?

That's how you think through it. Or, on the other hand, if it's way down here, then it's a hundred thousand tokens per second, which works out to about $1 per second. Okay? And there are about 30 million seconds in a year, and that translates into the annual revenue for that one megawatt data center.

And so that’s your goal. On the one hand, you would like your your token rate to be as fast as possible so that you can make really smart AIs. And if you have smart AIs, people pay you more money for it. On the other hand, the smarter the AI, the less you can make in volume. Very sensible trade off.
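To make that arithmetic concrete, here is a minimal back-of-envelope sketch, using only the illustrative figures quoted above (the $10 per million tokens example price and roughly 30 million seconds in a year); these are talk-level numbers, not real pricing.

```python
# Back-of-envelope AI-factory revenue math, using the illustrative
# figures from the talk (not real pricing).
SECONDS_PER_YEAR = 30e6          # ~30 million seconds in a year, as quoted
PRICE_PER_MILLION_TOKENS = 10.0  # the "$10 per million tokens" example

def annual_revenue(factory_tokens_per_sec: float) -> float:
    """Annual revenue of a 1 MW factory at a given aggregate throughput."""
    revenue_per_sec = factory_tokens_per_sec / 1e6 * PRICE_PER_MILLION_TOKENS
    return revenue_per_sec * SECONDS_PER_YEAR

# The two Hopper operating points from the chart: fully batched
# (slow per user) versus fast per user (low total throughput).
for label, tps in [("max batch", 2.5e6), ("fast per-user", 100e3)]:
    print(f"{label:>13}: ${annual_revenue(tps):,.0f} per year per megawatt")
```

The spread between those two operating points is exactly the throughput-versus-interactivity trade-off the curve is plotting.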

And this is the curve we’re trying to bend. Now what I’m just showing you right now is the fastest computer in the world, Hopper. It’s the computer that revolutionized everything. And so how do we make that better? So the first thing that we do is we come up with Blackwell with NVLink eight.

Same Blackwell, same compute node, with NVLink 8 using FP8. And so Blackwell is just faster. Faster, bigger, more transistors, more everything. But we like to do more than that.

And so we introduce a new precision. It's not quite as simple as four-bit floating point, but using four-bit floating point, we can quantize the model and use less energy to do the same work. And as a result, when you use less energy to do the same, you can do more. Because remember, one big idea is that every single data center in the future will be power limited. Your revenues are power limited.

You can figure out what your revenues are going to be based on the power you have to work with. This is no different than many other industries. And so we are now a power limited industry. Our revenues will be associated with that. And based on that, you want to make sure you have the most energy efficient compute architecture you can possibly get.
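As a rough illustration of what that four-bit precision step buys under a power budget, here is a toy sketch of FP4-style (E2M1) quantization. The value grid is the standard E2M1 one, but the single per-tensor scale below is a simplification of the finer-grained block scaling real systems use.

```python
import numpy as np

# Representable magnitudes of an FP4 (E2M1) format: a toy illustration of
# why quantizing to 4 bits shrinks memory and energy per operation.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4(weights: np.ndarray) -> np.ndarray:
    """Snap each weight to the nearest FP4 value after per-tensor scaling.
    Real schemes use finer-grained (per-block) scales; this is simplified."""
    scale = np.abs(weights).max() / FP4_GRID[-1]   # map max |w| onto 6.0
    scaled = weights / scale
    idx = np.abs(np.abs(scaled)[:, None] - FP4_GRID[None, :]).argmin(axis=1)
    return np.sign(scaled) * FP4_GRID[idx] * scale  # restore sign and scale

w = np.random.randn(8).astype(np.float32)
print("fp32:", np.round(w, 3))
print("fp4 :", np.round(quantize_fp4(w), 3))
```

Half the bits per weight roughly halves the memory traffic, which is where much of the energy in inference actually goes.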

Next, we scale up with NVLink 72. Does that make sense? Look at the difference with NVLink 72 FP4. And then, because our architecture is so tightly integrated, we add Dynamo to it, and Dynamo can extend that even further. Are you following me?

So Dynamo also helps Hopper, but Dynamo helps Blackwell incredibly. Now, yep, only at GTC do you get applause for that. And so now notice where I put those two shiny parts; that's kind of where your max Q is. You know, that's likely where you'll run your factory operations.

You're trying to find that balance between maximum throughput and maximum quality of AI. Smartest AI, the most of it. That point in x and y is really what you're optimizing for. And that's what it looks like if you look underneath those two squares. Blackwell is way, way better than Hopper.

And remember, this is not ISO chips. This is ISO power. This is ultimate Moore's Law. This is what Moore's Law was always about in the past. And now here we are, 25x in one generation at ISO power.

Not ISO chips. Not ISO transistors. ISO power. The ultimate limiter.

There's only so much energy we can get into a data center. And so within ISO power, Blackwell is twenty-five times. Now, that rainbow, that's incredible. That's the fun part. Look.

All the different configurations underneath the Pareto frontier, we call it the Pareto frontier. Under the Pareto frontier are millions of points we could have configured the data center to hit. We could have parallelized, split, and sharded the work in a whole lot of different ways. And we found the optimal answer, which is the Pareto frontier. Okay?

The Pareto frontier. And each one of them, the color shows you, is a different configuration. Which is the reason why this image says very, very clearly: you want a programmable architecture that is as homogeneously fungible as possible, because the workload changes so dramatically across the entire frontier. And look, at the top we've got expert parallel 8, batch of 3,000, disaggregation off, Dynamo off.

In the middle, expert parallel 64, with 26% used for context. So Dynamo is turned on, 26% context; the other 74% is not. Batch of 64, and expert parallel 64 on one, expert parallel 4 on the other. And then down here, all the way to the bottom, you've got tensor parallel 16 with expert parallel 4, batch of one, 1% context.
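To give a feel for the kind of search behind that rainbow, here is a hedged sketch: enumerate parallelism configurations, score each one on (per-user speed, factory throughput), and keep the non-dominated points. The performance model inside simulate() is made up for illustration; the real points come from the supercomputer simulations mentioned above.

```python
from itertools import product

# Hypothetical sweep behind a Pareto-frontier chart. The performance
# model below is a stand-in, not NVIDIA's simulator.
def simulate(tensor_par: int, expert_par: int, batch: int):
    """Return (tokens/sec per user, tokens/sec per MW) -- toy model."""
    per_user = 1e5 / (batch * tensor_par)          # big batches slow each user
    factory = 2.5e6 * batch / (batch + 64) * (1 + 0.1 * expert_par)
    return per_user, factory

points = [(simulate(tp, ep, b), (tp, ep, b))
          for tp, ep, b in product([1, 4, 8, 16], [4, 8, 64], [1, 64, 3000])]

# Pareto frontier: keep configurations no other point beats on both axes.
frontier = [(p, cfg) for p, cfg in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q, _ in points)]
for (per_user, factory), (tp, ep, b) in sorted(frontier):
    print(f"TP{tp:>2} EP{ep:>2} batch {b:>4}: "
          f"{per_user:10.1f} tok/s/user, {factory:12.0f} tok/s/MW")
```

Every colored band on the slide corresponds to one such configuration; the frontier is what survives after millions of these candidates are scored.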

The configuration of the computer is changing across that entire spectrum. And then this is what happens. So this is one input sequence length, kind of a commodity test case, a test case that you can benchmark relatively easily.

The input is 1,000 tokens. The output is 2,000. Notice that earlier we showed you a demo where the output was 8,000, 9,000 tokens.

Okay? And so obviously, this is not representative of just that one chat. This one is more representative, and the goal is to build these next generation computers for next generation workloads. And so here's an example of a reasoning model. And on a reasoning model, Blackwell is 40 times the performance of Hopper.

Straight up. Pretty amazing. You know, I've said before, and somebody actually asked why I would say that, but I said before that when Blackwell starts shipping in volume, you couldn't give Hoppers away. And this is what I mean.

And this makes sense. If you're still looking to buy a Hopper, don't be afraid. It's okay. But I'm the chief revenue destroyer. My sales guys are going, oh, no.

Don’t say that. There are circumstances where Hopper is fine. That’s the best thing I could say about Hopper. There are circumstances where you’re fine. Not many.

If I had to take a swing at it... and so that's kind of my point. When the technology is moving this fast, and because the workload is so intense and you're building these things as factories, we'd really like you to invest in the right versions. Okay. Just to put it in perspective, this is what a hundred megawatt factory looks like.

There's a hundred megawatt factory. Based on Hoppers, you have 45,000 dies, 1,400 racks, and it produces 300 million tokens per second. Okay? And then this is what it looks like with Blackwell.

You have 86... yeah, I know. That doesn't make any sense. Okay. So we're not trying to sell you less.

Okay. Our sales guys are going, Jensen, you're selling them less. This is better. Okay? And so anyways, the more you buy, the more you save.

It's even better than that. Now, the more you buy, the more you make. You know? And so anyhow, remember, everything is now in the context of AI factories. And although we talk about the chips, you always start from scale up.

We talk about the chips, but you always start from the full scale up. What can you scale up to, at the maximum? I want to show you now what an AI factory looks like, but AI factories are so complicated. I just gave you an example of one rack. It has 600,000 parts.

You know, it’s 3,000 pounds. Now you’ve got to take that and connect it with a whole bunch of others. And so we are starting to build what we call the digital twin of every data center. Before you build a data center, you ought to build a digital twin. Let’s take a look at this.

This is just incredibly beautiful. The world is racing to build state-of-the-art, large-scale AI factories. Bringing up an AI gigafactory is an extraordinary feat of engineering, requiring tens of thousands of workers from suppliers, architects, contractors, and engineers to build, ship, and assemble nearly 5 billion components and over 200,000 miles of fiber, nearly the distance from the Earth to the Moon. The NVIDIA Omniverse blueprint for AI factory digital twins enables us to design and optimize these AI factories long before physical construction starts. Here, NVIDIA engineers use the blueprint to plan a one gigawatt AI factory, integrating 3D and layout data of the latest NVIDIA DGX SuperPODs, advanced power and cooling systems from Vertiv and Schneider Electric, and optimized topology from NVIDIA Air, a framework for simulating network logic, layout, and protocols.

This work is traditionally done in silos. The Omniverse blueprint lets our engineering teams work in parallel and collaboratively, letting us explore various configurations to maximize TCO and power usage effectiveness. NVIDIA uses Cadence Reality Digital Twin, accelerated by CUDA and Omniverse libraries, to simulate air and liquid cooling systems, and Schneider Electric uses ETAP, an application to simulate power block efficiency and reliability. Real-time simulation lets us iterate and run large-scale what-if scenarios in seconds versus hours.

We use the digital twin to communicate instructions to the large body of teams and suppliers, reducing execution errors and accelerating time to bring-up. And when planning for retrofits or upgrades, we can easily test and simulate cost and downtime, ensuring a future-proof AI factory. This is the first time anybody who builds data centers... oh, that's so beautiful. Alright. I've got to race here, because it turns out I've got a lot to tell you.

And so if I go a little too fast, it's not because I don't care about you. It's just that I've got a lot of information to go through. Alright. So first, our roadmap. We're now in full production of Blackwell.

Computer companies all over the world are ramping these incredible machines at scale. And I'm just so pleased and so grateful that all of you worked hard on transitioning into this new architecture. And now, in the second half of this year, we'll easily transition into the upgrade. So we have the Blackwell Ultra NVLink 72. You know, it's one and a half times more FLOPS.

It's got a new instruction for attention. It's one and a half times more memory; all that memory is useful for things like the KV cache. And it's two times more bandwidth. Okay?

For networking bandwidth. And so, now that we have the same architecture, we'll just kind of gracefully glide into that, and that's called Blackwell Ultra. Okay? So that's coming the second half of this year. Now, there's a reason why... this is the only product announcement at any company where everybody goes, yeah.

Next. Yeah. And in fact, that's exactly the response I was hoping to get. And here's why. Look, we're building AI factories and AI infrastructure.

It's gonna take years of planning. This isn't like buying a laptop. You know? This isn't discretionary spend. This is spend that we have to go plan on.

And so we have to plan on having, of course, the land and the power, and we have to get our CapEx ready, and we get engineering teams, and we have to lay it out a couple, two, three years in advance, which is the reason why I show you our roadmap a couple, two, three years in advance. So that we don't surprise you in May. You know? Hi. You know?

In another month, we're gonna go to this incredible new system. I'll show you an example in a second. And so we plan this out over multiple years. The next click, one year out, is named after an astronomer, and her grandkids are here. Her name is Vera Rubin.

She discovered dark matter. Okay. Vera Rubin is incredible because the CPU is new. It's twice the performance of Grace, with more memory and more bandwidth, and yet it's just a tiny little 50 watt CPU. It's really quite incredible.

Okay? And Rubin: brand new GPU; CX-9, brand new networking SmartNIC; NVLink 6, brand new NVLink; brand new memories, HBM4. Basically, everything is brand new except for the chassis. And this way, we can take a whole lot of risk in one direction and not risk a whole bunch of other things related to the infrastructure. And so Vera Rubin NVLink 144 is the second half of next year.

Now, one of the things that I made a mistake on, and I just need you to make this pivot; we're gonna do this one time. Blackwell is really two GPUs in one Blackwell chip. We called that one chip a GPU, and that was wrong. And the reason is that it screws up all the NVLink nomenclature and things like that.

So going forward, without going back to fix Blackwell, when I say NVLink 144, it just means that it's connected to 144 GPUs. And each one of those GPUs is a GPU die, and it could be assembled in some package; how it's assembled could change from time to time. Okay? And so each GPU die is a GPU.

Each NVLink is connected to the GPUs. And so, Vera Rubin NVLink 144. And this now sets the stage for the second half of the following year, which we call Rubin Ultra. Okay? So, Vera Rubin Ultra.

I know. This one is where you should go... alright. So this is Vera Rubin Ultra, second half of '27. It's NVLink 576, extreme scale up.

Each rack is 600 kilowatts, two and a half million parts. Okay? And, obviously, a whole lot of GPUs. And everything is an X factor more: 14 times more FLOPS, 15 exaflops.

Instead of one exaflop, as I mentioned earlier, it's now 15 exaflops of scale-up. Okay? And it's, what, 4.6 petabytes per second, so 4,600 terabytes per second of scale-up bandwidth.

I don't mean aggregate; I mean scale-up bandwidth. And, of course, lots of brand new NVLink switches and CX-9. Okay? And so notice: 16 sites, four GPUs in one package, an extremely large NVLink.

Now just put that in perspective. This is what it looks like. Okay? Now this is gonna be fun. So you are just literally ramping up Grace Blackwell at the moment.

And I don't mean to make it look like a laptop, but here you go. Okay. So this is what Grace Blackwell looks like, and this is what Rubin looks like. ISO dimension. And so this is another way of saying: before you scale out, you have to scale up.

Does that make sense? Before you scale out, you scale up. And then, after that, you scale out with amazing technology that I'll show you in just a second. Right? So first, you scale up.

And now that gives you a sense of the pace at which we're moving. This is the amount of scale-up FLOPS. Hopper is 1x. Blackwell is 68x.

Rubin is 900x scale-up FLOPS. And then, if I turn it into essentially your TCO, that's power on top, and underneath is the area under the curve I was talking to you about: the square under the curve, which is basically FLOPS times bandwidth. Okay?

So a very easy gut check on whether your AI factories are making progress is watts divided by those numbers. And you can see that Rubin is gonna drive the cost down tremendously. Okay? So that's, very quickly, NVIDIA's roadmap. Once a year, like clock ticks.
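That gut check is simple enough to write down. A minimal sketch, assuming the FLOPS x-factors quoted above; the bandwidth x-factors below are illustrative placeholders, since only FLOPS figures were quoted.

```python
# The gut-check metric from the talk: watts / (FLOPS x bandwidth).
# FLOPS x-factors are the quoted ones (Hopper = 1x); the bandwidth
# x-factors are assumptions for illustration only.
generations = {
    #            flops_x  bandwidth_x   (relative to Hopper)
    "Hopper":    (1,      1),
    "Blackwell": (68,     8),    # bandwidth factor assumed
    "Rubin":     (900,    50),   # bandwidth factor assumed
}

ISO_POWER = 1.0  # same power budget every generation

for name, (flops_x, bw_x) in generations.items():
    cost = ISO_POWER / (flops_x * bw_x)
    print(f"{name:>9}: relative cost per unit of work = {cost:.6f}")
```

At fixed power, the cost proxy falls as fast as FLOPS times bandwidth rises, which is the point of the chart.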

Once a year. Okay. So how do we scale out? Well, scale up is NVLink.

Our scale-out network is InfiniBand and SpectrumX. Most were quite surprised that we came into the Ethernet world. And the reason we decided to do Ethernet is that if we could help Ethernet become like InfiniBand, have the qualities of InfiniBand, then the network itself would be a lot easier for everybody to use and manage. And so we decided to invest in Spectrum. We call it SpectrumX, and we brought to it the properties of congestion control and very low latency, and the amount of software that's part of our computing fabric.

And as a result, we made SpectrumX incredibly high performing. We scaled up the largest single GPU cluster ever as one giant cluster with SpectrumX. Right? And that was Colossus. And there are many other examples of it.

SpectrumX is unquestionably a huge home run for us. One of the areas that I'm very excited about is the largest enterprise networking company taking SpectrumX and integrating it into their product line, so that they can help the world's enterprises become AI companies. We're at a hundred thousand with CX-7. Now CX-8 is coming, and CX-9 is coming. And during Rubin's time frame, we would like to scale out the number of GPUs to many hundreds of thousands.

Now, the challenge with scaling out to many hundreds of thousands of GPUs is that the connection on scale up is copper. We should use copper as far as we can, and that's, call it, a meter or two. And that's incredibly good connectivity: very high reliability, very good energy efficiency, very low cost. And so we use copper as much as we can on scale up.

But on scale out, where the data centers are now the size of a stadium, we're gonna need something for much longer distances. And that's where silicon photonics comes in. The challenge of silicon photonics has been that the transceivers consume a lot of energy. To go from electrical to photonic, you have to go through a SerDes, a transceiver, several SerDes. And so each one of these... am I alone up here?

What happened to my networking guys? Can I have this up here? Yeah. Yeah. Let's bring it up so I can show people what I'm talking about.

Okay. So first of all, we're announcing NVIDIA's first co-packaged optics silicon photonic system. It is the world's first 1.6 terabit per second CPO. It is based on a technology called micro ring resonator modulator, MRM, and it is completely built with this incredible process technology at TSMC that we've been working with for some time. And we partnered with a giant ecosystem of technology providers to invent what I'm about to show you.

This is really crazy technology. Crazy, crazy technology. Now, the reason we decided to invest in MRM is so that we could prepare ourselves for MRM's incredible density and power, better density and power compared to Mach-Zehnder, which is used for telecommunications when you drive from one data center to another. Even in the transceivers that we use today, we use Mach-Zehnder, because the density requirement was not very high until now. And so if you look at these transceivers, this is an example of a transceiver.

They did a very good job tangling this up for me. Oh, wow. Thank you. Oh, mother of God. Okay.

This is where you've gotta turn reasoning on. It's not as easy as you think. These are squirrely little things. Alright. So this one right here, this is 30 watts.

Just remember, this is 30 watts. And if you buy it in high volume, it's a thousand dollars. This is a plug. On this side is electrical; on this side is optical.

Okay? So optics come in through the yellow. You plug this into a switch; it's electrical on this side. There are transceivers, lasers; it's a technology called Mach-Zehnder, and it's incredible.

And so we use this to go from the GPU to the switch, to the next switch, and then the next switch down, and the next switch down to the GPU, for example. And so, if we had a hundred thousand GPUs, we would have a hundred thousand of this side, and then another hundred thousand that connect switch to switch, and then on the other side, another NIC. If we had 250,000 GPUs, we'd add another layer of switches.

And so at 250,000, every GPU would have six transceivers, six of these plugs. And those six plugs would add 180 watts per GPU, 180 watts per GPU, and $6,000 per GPU. Okay? And so the question is, how do we scale up now to millions of GPUs?

Because if we had a million GPUs, multiplied by six, right, that would be 6 million transceivers times 30 watts: 180 megawatts of transceivers. They don't do any math; they just move signals around. And so the question is, how could we afford it? As I mentioned earlier, energy is our most important commodity.
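The transceiver arithmetic is worth writing out, using only the figures just quoted (six pluggables per GPU, 30 watts and roughly $1,000 each in volume):

```python
# Pluggable-optics math from the talk: six transceivers per GPU,
# 30 W and ~$1,000 each in high volume.
WATTS_PER_TRANSCEIVER = 30
DOLLARS_PER_TRANSCEIVER = 1_000
TRANSCEIVERS_PER_GPU = 6

for gpus in (100_000, 250_000, 1_000_000):
    n = gpus * TRANSCEIVERS_PER_GPU
    megawatts = n * WATTS_PER_TRANSCEIVER / 1e6
    print(f"{gpus:>9,} GPUs: {n:>9,} transceivers, "
          f"{megawatts:6.1f} MW, ${n * DOLLARS_PER_TRANSCEIVER:,}")
```

At a million GPUs, that is 180 megawatts spent moving signals around before a single FLOP is computed, which is the power the co-packaged optics reclaim.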

Everything is ultimately related to energy. So this is gonna limit our revenues, our customers' revenues, by subtracting out 180 megawatts of power. And so this is the amazing thing that we did: we invented the world's first MRM micro ring modulator, and this is what it looks like. There's a little waveguide.

You see that waveguide goes to a ring. That ring resonates, and it controls the reflectivity of the waveguide as the light goes around, limiting and modulating the amount of light that goes through. It shuts the light off by absorbing it, or passes it on. Okay? It turns this direct, continuous laser beam into ones and zeros.

And that's the miracle. And then the photonic IC is stacked with the electronic IC, which is then stacked with a whole bunch of micro lenses, which is stacked with this thing called a fiber array. These things are all manufactured at TSMC using a technology they call COUPE, and packaged using 3D CoWoS technology, working with a whole bunch of technology providers, the names I just showed you earlier, and it turns into this incredible machine. So let's take a look at the video. Just a technology marvel.

And they turn into these switches, our InfiniBand switches. The silicon is working fantastically. In the second half of this year, we will ship the silicon photonics switch. In the second half of next year, we'll ship the SpectrumX one. Because of the MRM choice, because of the incredible technology risks we took over the last five years, we filed hundreds of patents, and we've licensed it to our partners so that we can all build them.

Now we're in a position to put silicon photonics with co-packaged optics, no transceivers, fiber directly into our switches, with a radix of 512. This is 512 ports. This would simply not be possible any other way. And so this now sets us up to scale up to these multi-hundred-thousand-GPU and multimillion-GPU systems. And the benefit, just so you can imagine it, is incredible.

In a data center, we could save tens of megawatts. Tens of megawatts. Let's say 60 megawatts. Six megawatts is 10 Rubin Ultra racks, so 60 megawatts, that's a hundred Rubin Ultra racks of power that we can now deploy into Rubins. Alright.
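A quick conversion, using the 600 kilowatt Rubin Ultra rack figure quoted earlier:

```python
# Converting photonics power savings into racks: a Rubin Ultra rack is
# quoted at 600 kW, so every 6 MW saved powers ten more racks.
RACK_KILOWATTS = 600

for saved_mw in (6, 60):
    racks = saved_mw * 1_000 // RACK_KILOWATTS
    print(f"{saved_mw} MW saved -> {racks} Rubin Ultra racks")
```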

So this is our roadmap: once a year, a new architecture every two years, a new product line every single year, X factors up. And we try to take silicon risk, or networking risk, or system chassis risk in pieces, so that we can move the industry forward as we pursue these incredible technologies. Vera Rubin. And I really appreciate the grandkids being here. This is our opportunity to recognize her and to honor her for the incredible work that she did. Our next generation will be named after Feynman.

Okay. NVIDIA’s roadmap. Let me talk to you about enterprise computing. This is really important. In order for us to bring AI to the world’s enterprise, first, we have to go to a different part of NVIDIA.

The beauty of Gaussian splats. Okay. In order for us to take AI to enterprise, take a step back for a second and remind yourself of this: AI and machine learning have reinvented the entire computing stack. The processor is different.

The operating system is different. The applications on top are different. The way you orchestrate them is different, and the way you run them is different. Let me give you one example.

The way you access data will be fundamentally different from the past. Instead of retrieving precisely the data that you want and reading it to try to understand it, in the future we will do what we do with Perplexity: instead of doing retrieval that way, I just ask Perplexity what I want. Ask it a question, and it will tell you the answer. This is the way enterprise IT will work in the future as well.

We'll have AI agents, which are part of our digital workforce. There are a billion knowledge workers in the world. There are probably going to be 10 billion digital workers working with us side by side. A hundred percent of software engineers in the future, and there are 30 million of them around the world, a hundred percent of them are going to be AI assisted. I'm certain of that.

A hundred percent of NVIDIA software engineers will be AI assisted by the end of this year. And so AI agents will be everywhere. How they run, what enterprises run, and how we run it will be fundamentally different. And so we need a new line of computers. And this is what a PC should look like. 20 petaflops. Unbelievable.

72 CPU cores, a chip-to-chip interface, HBM memory, and, just in case, some PCI Express slots for your GeForce. Okay. So this is called DGX Station. DGX Spark and DGX Station are gonna be available from all of the OEMs: HP, Dell, Lenovo, ASUS.

It's gonna be manufactured for data scientists and researchers all over the world. This is the computer of the age of AI. This is what computers should look like, and this is what computers will run in the future. And we have a whole lineup for enterprise now, from the little tiny one, to workstation ones, to server ones, to supercomputer ones, and these will be available from all of our partners. We will also revolutionize the rest of the computing stack.

Remember, computing has three pillars. There's computing; you're looking at it. There's networking; as I mentioned earlier, SpectrumX is going to the world's enterprises, an AI network. And the third is storage. Storage has to be completely reinvented.

Rather than a retrieval-based storage system, it's going to be a semantics-based storage system. And so the storage system has to be continuously embedding information in the background, taking raw data and embedding it into knowledge. And then later, when you access it, you don't retrieve it. You just talk to it. You ask it questions.

You give it problems. One of the examples, I wish we had a video of it, but Aaron at Box worked with us to put one up in the cloud. And it's basically, you know, a super smart storage system. And in the future, you're gonna have something like that in every single enterprise. That is the enterprise storage of the future.
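Here is a minimal sketch of that retrieval-to-semantics shift: documents are embedded in the background, and a question is answered by similarity rather than by path. The hash-based embed() below is a stand-in for a real embedding model, and the whole class is illustrative, not any vendor's API.

```python
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy bag-of-words embedding; a real system would use a learned model."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class SemanticStore:
    """Storage that embeds on ingest and answers questions by similarity."""
    def __init__(self):
        self.docs: list[tuple[str, list[float]]] = []

    def ingest(self, text: str) -> None:   # continuous background embedding
        self.docs.append((text, embed(text)))

    def ask(self, question: str, k: int = 1) -> list[str]:
        q = embed(question)
        ranked = sorted(self.docs,
                        key=lambda d: -sum(a * b for a, b in zip(q, d[1])))
        return [text for text, _ in ranked[:k]]

store = SemanticStore()
store.ingest("Q3 revenue grew on strong data center demand")
store.ingest("The cafeteria menu changes on Fridays")
print(store.ask("how did revenue do last quarter?"))
```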

And working with the entire storage industry, really fantastic partners: DDN, Dell, HP Enterprise, Hitachi, IBM, NetApp, Nutanix, Pure Storage, VAST, and Weka. Basically, the entire world's storage industry will be offering this stack. For the very first time, your storage system will be GPU accelerated. And so somebody thought I didn't have enough slides. Michael thought I didn't have enough slides.

He's a gentleman. Just in case you don't have enough slides, can I just put this in there? And so this is Michael's slide. He sent this to me. He goes, just in case you don't have any slides.

And I've got too many slides. But this is such a great slide, and let me tell you why. In one single slide, he's explaining that Dell is going to be offering a whole line of NVIDIA enterprise IT AI infrastructure systems and all the software that runs on top of it. Okay.

So you can see that we're in the process of revolutionizing the world's enterprise. We're also announcing today this incredible model that everybody can run. I showed you earlier R1, a reasoning model, versus Llama 3, a non-reasoning model. And obviously, R1 is much smarter.

But we can do even better than that. We've made it enterprise-ready for any company, and it's now completely open source as part of our system we call NIMs. You can download it. You can run it anywhere. You can run it on DGX Spark.

You can run it on DGX Station. You can run it on any of the servers that the OEMs make. You can run it in the cloud. You can integrate it into any of your agentic AI frameworks. And we're working with companies all over the world, and I'm gonna flip through these.

So watch very carefully. I've got some great partners in the audience I wanna recognize. Accenture: Julie Sweet and her team are building their AI factory and their AI framework. Amdocs, the world's largest telecommunications software company. AT&T: John Stankey and his team are building an AT&T agentic AI system. Larry Fink and the BlackRock team are building theirs. Anirudh:

In the future, not only will we hire ASIC designers, we're gonna hire a whole bunch of digital ASIC designers from Anirudh's Cadence that will help us design our chips. And so Cadence is building their AI framework. And as you can see, in every single one of them, there are NVIDIA models, NVIDIA NIMs, and NVIDIA libraries integrated throughout, so that you can run it on prem or in any cloud. Capital One, one of the most advanced financial services companies in using technology, has NVIDIA all over it. Deloitte, Jason and his team; EY, Janet and her team; Nasdaq, Adena and her team, integrating NVIDIA technology into their AI frameworks; and then Christian and his team at SAP, and Bill McDermott and his team at ServiceNow.

That was pretty good. This is one of those keynotes where the first slide took thirty minutes, and then all the other slides took thirty minutes. Alright. So next, let's go somewhere else. Let's go talk about robotics, shall we?

Let's talk about robots. Well, the time has come for robots. Robots have the benefit of being able to interact with the physical world and do things that digital information otherwise cannot. We know very clearly that the world has a severe shortage of human laborers, human workers. By the end of this decade, the world is going to be at least 50 million workers short.

We’d be more than delighted to pay them each $50,000 to come to work. We’re probably gonna have to pay robots $50,000 a year to come to work. And so this is going to be a very, very large industry. There are all kinds of robotic systems. Your infrastructure will be robotic.

Billions of cameras in warehouses and factories; ten, twenty million factories around the world. Every car is already a robot, as I mentioned earlier. And now we're building general robots. Let me show you how we're doing that. Everything that moves will be autonomous. Physical AI will embody robots of every kind in every industry.

Three computers built by NVIDIA enable a continuous loop of robot AI simulation, training, testing, and real world experience. Training robots requires huge volumes of data. Internet scale data provides common sense and reasoning, but robots need action and control data, which is expensive to capture. With blueprints built on NVIDIA Omniverse and Cosmos, developers can generate massive amounts of diverse synthetic data for training robot policies. First, in Omniverse, developers aggregate real world sensor or demonstration data according to their different domains, robots, and tasks.

Then use Omniverse to condition Cosmos, multiplying the original captures into large volumes of photo real diverse data. Developers use Isaac Lab to post train the robot policies with the augmented dataset. And let the robots learn new skills by cloning behaviors through imitation learning or through trial and error with reinforcement learning AI feedback. Practicing in a lab is different than the real world. New policies need to be field tested.

Developers use Omniverse for software- and hardware-in-the-loop testing, simulating the policies in a digital twin with real-world environmental dynamics, domain randomization, physics feedback, and high-fidelity sensor simulation. Real-world operations require multiple robots to work together. Mega, an Omniverse blueprint, lets developers test fleets of post-trained policies at scale. Here, Foxconn tests heterogeneous robots in a virtual NVIDIA Blackwell production facility. As the robot brains execute their missions, they perceive the results of their actions through sensor simulation, then plan their next action.

Mega lets developers test many robot policies, enabling the robots to work as a system, whether for spatial reasoning, navigation, mobility, or dexterity. Amazing things are born in simulation. Today, we're introducing NVIDIA Isaac GR00T N1. GR00T N1 is a generalist foundation model for humanoid robots. It's built on the foundations of synthetic data generation and learning in simulation.

GR00T N1 features a dual-system architecture for thinking fast and slow, inspired by principles of human cognitive processing. The slow-thinking system lets the robot perceive and reason about its environment and instructions, and plan the right actions to take. The fast-thinking system translates the plan into precise and continuous robot actions. GR00T N1's generalization lets robots manipulate common objects with ease and execute multi-step sequences collaboratively. And with this entire pipeline of synthetic data generation and robot learning, humanoid robot developers can post-train GR00T N1 across multiple embodiments and tasks in many environments.

Around the world, in every industry, developers are using NVIDIA’s three computers to build the next generation of embodied AI. Physical AI and robotics are moving so fast. Everybody pay attention to this space. This could very well likely be the largest industry of all. At its core, we have the same challenges.

As I mentioned before, there are three that we focus on. They are rather systematic. One, how do you solve the data problem? Where do you create the data necessary to train the AI? Two, what's the model architecture?

And then three, what are the scaling laws? How can we scale either the data, the compute, or both, so that we can make AI smarter and smarter? How do we scale? Those fundamental problems exist in robotics as well. In robotics, we created a system called Omniverse.

It's our operating system for physical AIs. You've heard me talk about Omniverse for a long time. We added two technologies to it. Today, I'm going to show you two things. One of them is so that we can scale AI with generative capabilities, a generative model that understands the physical world.

We call it Cosmos. Using Omniverse to condition Cosmos, and using Cosmos to generate an infinite number of environments, allows us to create data that is grounded, controlled by us, and yet systematically infinite at the same time. Okay? As you just saw in Omniverse, we use candy colors to give you an example of us controlling the robot in the scenario perfectly, and yet Cosmos can create all these virtual environments. The second thing, just as we were talking about earlier, is that one of the incredible scaling capabilities of language models today is reinforcement learning with verifiable rewards.

The question is, what are the verifiable rewards in robotics? And as we know very well, it's the laws of physics. Verifiable physics rewards. And so we need an incredible physics engine. Well, most physics engines have been designed for a variety of reasons.

They could be designed for large machinery, or maybe for virtual worlds, video games and such. But we need a physics engine that is designed for very fine-grained rigid and soft bodies, designed for training tactile feedback, fine motor skills, and actuator controls. We need it to be GPU accelerated, so that these virtual worlds can live in super-linear time, super real time, and train these AI models incredibly fast. And we need it to be integrated harmoniously into a framework that is used by roboticists all over the world: MuJoCo.
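To make "verifiable physics rewards" concrete, here is a hedged toy sketch: the simulator itself scores each rollout (did the body stay upright?), so no human labeling is needed. The dynamics, policy, and search below are stand-ins for illustration, not the Newton API.

```python
import random

def simulate_step(angle: float, action: float) -> float:
    """One step of an inverted-pendulum-ish toy: gravity tips the body
    over, the action pushes back, noise stands in for contact forces."""
    return angle + 0.1 * angle + action + random.gauss(0, 0.01)

def rollout(gain: float, steps: int = 200) -> float:
    """Run a proportional controller and return the physics-verified
    reward: +1 for every step the body stays within 0.5 rad of upright."""
    angle, reward = 0.05, 0.0
    for _ in range(steps):
        angle = simulate_step(angle, action=-gain * angle)
        reward += 1.0 if abs(angle) < 0.5 else 0.0
    return reward

# Crude policy search: keep the gain the physics engine scores highest.
best_reward, best_gain = max((rollout(g), g) for g in [0.0, 0.05, 0.2, 0.4])
print(f"best gain {best_gain} with verified reward {best_reward:.0f}/200")
```

The reward is checked against the simulated physics itself, which is why the fidelity and speed of the engine matter so much.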

And so today, we're announcing something really, really special: a partnership of three companies, DeepMind, Disney Research, and NVIDIA, and we call it Newton. Let's take a look at Newton. Thank you. Alright. Let's start that over, shall we?

Let’s not ruin it for them. Hang on a second. Somebody talk to me. I need feedback. What happened?

I just need a human to talk to. Come on. That's a good joke. Give me a human to talk to. Janine, I know it's not your fault, but talk to me.

We got it. We've just got two minutes left. I'm right here. They're reracking it. They're reracking it?

I don’t even know what that means. Okay. Okay. Tell me that wasn’t amazing. Hey, Blue.

How are you doing? How do you like your new physics engine? You like it? Yeah. I bet. I know.

Tactile feedback, rigid body, soft body simulation, super real time. Can you imagine, what you were just looking at is a complete real-time simulation? This is how we're gonna train robots in the future. Just so you know, Blue has two computers, two NVIDIA computers inside. Look how smart you are.

Yes. You’re smart. Okay. Alright. Hey, Blue.

Listen. How about let’s take them home? Let’s finish this keynote. It’s lunchtime. Are you ready?

Let’s finish it up. We have another announcement. You’re good. You’re good. Just stand right here.

Stand right here. Stand right here. Alright. Good. Right there.

That's good. Alright, stand. Okay. We have more amazing news. I told you our robotics effort has been making enormous progress.

And today, we're announcing that GR00T N1 is open source. Let's wrap up. I wanna thank all of you for coming to GTC. We talked about several things. One, Blackwell is in full production, and the ramp is incredible.

Customer demand is incredible, and for good reason: there's an inflection point in AI, and the amount of computation we have to do is so much greater as a result of reasoning AI and the training of reasoning AI systems and agentic systems. Second, Blackwell NVLink 72 with Dynamo is 40 times the AI factory performance of Hopper, and inference is going to be one of the most important workloads in the next decade as we scale out AI. Third, we have an annual rhythm of roadmaps that has been laid out for you, so that you can plan your AI infrastructure. And then we have three AI infrastructures we're building:

AI infrastructure for the cloud, AI infrastructure for enterprise, and AI infrastructure for robots.

