On Wednesday, 11 June 2025, NVIDIA Corporation (NASDAQ:NVDA) presented its strategic vision at the GTC Paris Financial Analyst Q&A. The conference highlighted NVIDIA’s ambitions in AI expansion across Europe, emphasizing the importance of sovereign AI infrastructure. While the company showcased optimism about its role in the AI revolution, challenges such as supply chain constraints and geopolitical tensions were also addressed.
Key Takeaways
- NVIDIA is focusing on AI expansion in Europe, advocating for sovereign AI infrastructure.
- Quantum-classical computing and physical AI are key areas of development.
- Supply chain constraints and geopolitical issues, such as the China export ban, pose challenges.
- NVIDIA maintains strong margins through strategic investments and cost management.
- The company is transitioning to the GB300 architecture and emphasizing NVLink technology.
Financial Results
- China revenue forecasts are adjusted to zero due to export bans, impacting a $30-40 billion annual business.
- NVIDIA maintains strong margins through a Total Cost of Ownership (TCO) value proposition and cost structure optimization.
Operational Updates
- NVIDIA is developing a quantum-classical hybrid approach, with expectations for significant progress in 2-3 years.
- Physical AI models are being designed to enable robots to generate motion from prompts, enhancing robotics and industrial automation.
- The company has visibility into several gigawatt projects, including those supported by European governments.
- The transition to the GB300 architecture is progressing smoothly, with NVIDIA confident in its supply chain management.
Future Outlook
- Sovereign AI infrastructure is anticipated to significantly impact GDP, with an estimated $1.5 trillion buildout globally.
- NVIDIA expects sustained data center revenue growth through 2026, driven by increasing AI compute demand.
- The company is focused on edge computing for self-driving cars, robotics, facilities, and base stations.
Q&A Highlights
- NVIDIA emphasizes the role of GPUs in quantum-classical systems, focusing on error correction.
- Physical AI models will be multimodal, allowing robots to reason and act based on prompts.
- NVLink technology is being made available to ASIC developers for integration with NVIDIA’s ecosystem.
- The RTX Pro server is designed to integrate AI into traditional enterprise IT environments.
In conclusion, NVIDIA’s strategic focus on AI expansion and innovation was evident at the GTC Paris conference. For a detailed understanding, readers are encouraged to refer to the full transcript.
Full transcript - GTC Paris Financial Analyst Q&A:
Unidentified speaker: So thanks for joining us. Jensen and I are here to go through any questions that you have, both on what you've seen today at our Paris GTC and on the several other GTCs we've done over the last couple of months leading up to it. Probably the most important thing to understand about doing so much of this here in Europe, in the EU, in France, as well as the time we spent in Taiwan, is to really emphasize that AI is here worldwide. And it's important to see what is going to be possible with AI; this is an area that is growing faster than any other technology in history, reaching every single region at the speed that it has. What that's gonna take, though, is the influence and help of the sovereign nations, the sovereign help that we're going to need from governments, to really expand both here in the EU and in France.
I know you saw so much of that today with Jensen, but we'd also like to talk more from the investor standpoint in terms of what we're seeing. So we're gonna open it up for questions, unless you wanna start with something. Nope.
Jensen, CEO, NVIDIA: Great to see all of you. Make it nice and loud.
CJ Muse, Analyst, Cantor Fitzgerald: Good afternoon.
Jensen, CEO, NVIDIA: Hey, CJ.
Luciano, Analyst, Impacts: Great to see you.
Jensen, CEO, NVIDIA: The pregame show. Yeah. I understand he was quite a hit.
CJ Muse, Analyst, Cantor Fitzgerald: I don't know. We'll see later. Well, thank you for taking the question. CJ Muse, Cantor Fitzgerald. Two-part question.
You know, your commentary on quantum computing seemed to change a little bit. So curious, you know, where do you see commercialization? And then secondly, on the sovereign front, you've been traveling throughout Europe. I think your travels continue beyond France. I would love to hear how your conversations have gone and how you think about the magnitude of coming investments relative to what we heard from the Middle East. Thanks so much.
Jensen, CEO, NVIDIA: Yeah. Appreciate it. First of all, my feelings about quantum are consistent with the past. However, my feelings about quantum-classical are very different. And I think the entire industry is now recognizing that quantum-classical is the way to go.
It's not about a standalone quantum computer. It's about a quantum computer connected to a GPU supercomputer to do all of the controls, to do the error correction. And the groundbreaking work being done in error correction is really quite significant. You know, basically, if you look at a qubit today, a logical qubit is represented by a cluster of physical qubits. Are you guys following? Am I talking weird stuff?
So these physical qubits: it takes a whole bunch of them working together, entangled together, to represent a logical qubit. And then you have a bunch of ancilla qubits, which are basically shadow qubits. Because, as you know, the Schrödinger's cat problem: if you try to measure the qubits, it collapses the state. It loses its coherence. It's no longer in that superposition state.
It'll either be on or off. It'll be, you know, Schrödinger's cat: dead or alive. But no longer in superposition. The recent breakthroughs in error correction require a lot of computing outside the quantum computer.
And we're making really great breakthroughs there. And so if you look at the GPU supercomputers that are gonna be connected to these quantum computers, they're gonna be giant, just doing the error correction. If we keep going at this rate, let's say we get 10 times as many logical qubits every five years, we'll probably have something close to 20 to 100 logical qubits in some five years. With 100 logical qubits, just the amount of state it could represent is sufficient to do some early biomolecular or chemistry stuff, materials work, that could be quite useful.
And the way we're gonna do it, and this is the reason I think the community is getting together on this idea: instead of using the quantum computer to do all the simulations, we're gonna use quantum computers to generate ground truth. The electron simulations: you know, behave like an electron state. And then generate a whole bunch of synthetic data that we'll train AI models with.
Are you guys following me? So this quantum-classical hybrid is gaining a lot of momentum right now. And I think everybody's getting excited. So we can kinda see it being, you know, two or three years out from doing some real work. But in the meantime, what I said is true.
Every single supercomputing center is gonna go quantum-classical. Every one. 100%. I've not met one that's not gonna go quantum-classical. And that's why CUDA-Q is such a revolution. We're basically working with everybody in the quantum computing industry on CUDA-Q.
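To make the quantum-classical division of labor concrete, here is a minimal, illustrative Python sketch: one logical qubit is encoded as a cluster of physical qubits in a toy repetition code, ancilla-style parity checks produce a syndrome without reading the data qubits directly, and a classical decoder, the role the GPU supercomputer plays at scale, corrects errors every cycle. The code size, noise rate, and decoder are assumptions for illustration, not CUDA-Q APIs.

```python
import random

PHYSICAL_PER_LOGICAL = 9   # a cluster of physical qubits encodes 1 logical qubit
FLIP_PROB = 0.02           # assumed per-cycle bit-flip probability

def noisy_cycle(qubits):
    """One cycle of physical noise: each qubit may independently flip."""
    return [q ^ (random.random() < FLIP_PROB) for q in qubits]

def measure_syndrome(qubits):
    """Ancilla-style parity checks between neighbours; in a real device this
    avoids measuring the data qubits directly, so superposition survives."""
    return [qubits[i] ^ qubits[i + 1] for i in range(len(qubits) - 1)]

def decode_and_correct(qubits, syndrome):
    """The classical side (at scale, a GPU): parity checks mark boundaries
    between agreeing segments; flip the minority back to the majority."""
    disagrees = [0]                      # does qubit i disagree with qubit 0?
    for s in syndrome:
        disagrees.append(disagrees[-1] ^ s)
    zeros = disagrees.count(0)
    frame = 0 if zeros >= len(disagrees) - zeros else 1
    return [q ^ (d ^ frame) for q, d in zip(qubits, disagrees)]

logical = [0] * PHYSICAL_PER_LOGICAL     # logical |0>
for cycle in range(1000):
    logical = noisy_cycle(logical)
    logical = decode_and_correct(logical, measure_syndrome(logical))

print("logical qubit intact after 1000 cycles:", all(q == 0 for q in logical))
```

The point of the sketch is the shape of the workload: the quantum side only produces syndromes, while all of the decoding runs on classical hardware every cycle, which is why the attached GPU systems get so large.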
And so with respect to the build out here, it's much more for local use, indigenous use. The Middle East was some indigenous use, but it's really about hosting the cloud for American companies. And so it's related, not exactly the same. Does it make sense? Most of the stuff we're talking about here: the telcos, the regional cloud service providers, the 20 AI factories that are gonna get built with government support across pan-European countries.
That's all being built for local consumption. And I think that, long term, sovereign AI should represent the GDP of the countries. In the case of Europe, it's taken longer to get engaged. And the reason for that is that its information technology industry is lighter than the United States', but its heavy industry is much bigger than the United States'. That's the reason why robotics is gonna be such a big deal here.
Industrial digital twins are gonna be a big deal here. You know, all the factories are gonna be digital. AI is gonna be everywhere in those factories, and that's the reason the topics here are quite different from the topics in the United States. Overall, across the world's regions combined, we estimate over the course of the next several years about a trillion and a half dollars worth of build out. And so once you cobble up all the math, it kinda makes sense.
Joe Moore, Analyst, Morgan Stanley: Joe Moore, Morgan Stanley. You talked a lot about physical AI today. Can you talk about what we should look for to see the model development? And is it gonna be the startups developing physical AI capabilities, new models, or is that physical AI information getting incorporated into the LLMs that are already foundational?
Jensen, CEO, NVIDIA: The physical AI models are going to be different from the LLMs. They're gonna be multimodal. Like, for example, you'll walk up to a robot and just tell it to do something. And, you know, just as generative AI can generate pixels from your prompts, you should be able to generate motion from the prompts. And it'll reason about it.
Just like generative AI right now can reason about the prompts and reason about the pixels before it generates the pixels, you can now reason about the motion before it generates the motion. And you could see the robot thinking: okay, I've been asked to put this apple in that drawer, but the drawer is not open. So I have to open the drawer. And then I gotta pick up the apple, put the apple in the drawer, close the drawer.
And so that reasoning process, you can kind of see it happening now. Right? Because you guys see it in GPT, you know, o3, or, you know, DeepSeek. The technology exists to do all that. My sense is that, here, you know, Germany has a lot of robotics capability. France has robotics capability.
The UK has it. I mean, because the heavy industry is quite big, the Nordic countries have a lot of robotics capability. ABB, for example. Lots of robotics capability here. And they've just been missing the software capability, if you will. You know?
And previous generation robots are all pre-articulated. Do you guys understand what I'm saying? Pick this thing up from here. Put it over here. 100% the same, every single time.
So it's preprogrammed. Now you don't have to preprogram it. You just tell it to do it. And as a result, robotics will be much more accessible to small and medium-sized companies.
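As a sketch of that prompt-to-motion idea, the following toy pipeline first reasons an instruction into ordered sub-goals and then maps each sub-goal to a motion primitive, rather than replaying a pre-articulated program. The decomposition and the primitive type are invented stand-ins for a physical-AI model, not an NVIDIA robotics API.

```python
from dataclasses import dataclass

@dataclass
class MotionPrimitive:
    action: str
    target: str

def reason_about_task(prompt: str) -> list[str]:
    """Stand-in for the reasoning pass of a physical-AI model: break the
    instruction into ordered sub-goals (hard-coded here for the apple demo)."""
    if "apple" in prompt and "drawer" in prompt:
        return ["open the drawer", "pick up the apple",
                "place the apple in the drawer", "close the drawer"]
    return [prompt]

def plan_to_motion(step: str) -> MotionPrimitive:
    """Map a sub-goal to a motion primitive; a real system would generate
    joint trajectories from the prompt, not symbolic labels."""
    verb, _, rest = step.partition(" ")
    return MotionPrimitive(action=verb, target=rest)

prompt = "put this apple in that drawer"
for step in reason_about_task(prompt):
    print(f"reasoned step: {step!r} -> {plan_to_motion(step)}")
```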
Brett Simpson, Analyst, Arete Research: It's Brett Simpson at Arete Research. Jensen, I just wanted to ask about gigawatts, these gigawatt projects that are being announced. It's a fairly new concept to us all here. But how many do you have line of sight to, looking into the next two, three years? How many gigawatt projects do you think are already underway?
I guess there's one coming in France. It's been announced already. But give us a sense. And in your presentation earlier, I think you said there were five European gigafactories. Is that five separate gigawatt projects?
If you can, just help give us a sense of the scale, of how many gigawatt projects you have.
Jensen, CEO, NVIDIA: Yeah. You got it all right. We have line of sight to the telcos, the regional cloud providers that I mentioned. For example, Mistral is the one here in France. In the UK, it's Nscale and Nebius.
In Germany, they changed their name from iGenius. I think it's called Combin or something like that. But I thought iGenius was pretty good. I'll go find out why they changed. But these are all line of sight.
And then the 20... none of these are supported by government. These are all business-oriented startups or, you know, scale-outs. The ones that are supported by government are the 20 AI factories, and a handful of them are gigafactories. That's what we have line of sight on at this moment. There'll probably be more.
You know, if you just added up everything I just said, it is lower than the GDP representation of Europe. Now, of course, for some time, the American cloud service providers will come in and serve that. Okay? So over time, maybe the regional cloud providers will get larger and larger. Because you have sovereignty issues with respect to data privacy, and generally people are concerned about geopolitics these days.
So you kind of want to have infrastructure locally in each one of these countries. There's a reason for some of the build out.
Lou Miscioscia, Analyst, Daiwa Capital Markets America: Thank you. Lou Miscioscia, Daiwa Capital Markets America. Maybe you could go into what the limiting factors are for you to produce more of your products, on the small scale. And then on the big scale, you mentioned that maybe some European companies don't have the software ability. What else could drive additional demand for all this AI stuff that we see at your conference here, which is pretty amazing?
Thank you.
Jensen, CEO, NVIDIA: The supply... none of the supply is horribly difficult to get now. It's constrained, but, you know, we're still growing fairly fast. So nothing is sitting around. We don't have a whole bunch of Blackwells and CoWoS and these supercomputers sitting around. They build what we ask them to build, and so we have to forecast it. But we're not limited by CoWoS.
We're not limited by HBM. I just have to forecast it. And our lead times are probably, you know, more than a year. From the time that I start wafers on Blackwell to the time I ship a supercomputer out the door, it's coming up on close to a year, which is a real advantage for us. And the reason for that is that I have a better feel for the total consumption in the world than just about anybody.
And so I could place a giant order on TSMC and Micron and Hynix and Samsung and SPIL and Amkor and Foxconn. I mean, our supply chain is massive. And we could place a few hundred billion dollars of orders on our supply chain because we have great confidence in the end market and the fungibility of our products everywhere. You know? If we were somehow bespoke, if you will, only useful for this one customer, then it's harder for us to have the confidence to build for the whole market. Our confidence is that NVIDIA's everywhere.
And so we're not so much limited by any critical component per se. It's just that nothing we build is easy, and so we just have to forecast it. In terms of the end market, there are several things that limit it. One of them is just local languages. You know, we think that everybody should speak English, but they don't.
You know? And some people prefer interacting with their devices in their native language, which is very understandable. Of course, if you wanna reach the whole population in Israel (and Israel comes to mind because I was just working on it), you're gonna need a large language model trained in the language and the data and the customs of Hebrew. And the same with Arabic countries and so on. You can just multiply that out.
If we want AI to be successful in each one of these regions, the technology, the agentic technology, is there, but the reasoning AI language model needs to exist. And so I was talking about that today. All of those partners of ours that are gonna take NVIDIA's Nemotron and optimize it for their local language, they now have a state-of-the-art capability. They already have the data prepared for all their local languages. We'll fine-tune each and every one of them.
Each one of them will probably take, you know, about a month of supercomputer work. But it's not so bad. And then we take that model. Now we have to connect it into a search system. And Perplexity is ideal. We just plug it right into Perplexity, and off they go.
That’s the idea.
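A schematic of the per-language pipeline just described, with placeholder functions standing in for roughly a month of supercomputer fine-tuning per language and for the hand-off to a search front end such as Perplexity. The class and function names are hypothetical, not real NVIDIA or Perplexity APIs.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    language: str = "en"

def fine_tune(base: Model, language: str) -> Model:
    """Placeholder for ~a month of supercomputer fine-tuning on a local
    corpus (language, data, customs), as described in the transcript."""
    return Model(name=f"{base.name}-{language}", language=language)

def plug_into_search(model: Model) -> str:
    """Placeholder for wiring the tuned model into a search system."""
    return f"search endpoint backed by {model.name}"

base = Model(name="nemotron")                 # open base model (assumed id)
for lang in ["he", "ar", "fr", "de"]:         # Hebrew, Arabic, French, German
    print(plug_into_search(fine_tune(base, lang)))
```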
Timm Schulze-Melander, Analyst, Redburn: Hi. Thank you. It's Timm Schulze-Melander from Redburn. I just had a follow-on, actually, from Joe Moore's question about model progress. So anecdotally, there's lots of excitement, lots of enthusiasm, but investors I speak to who may be on the more skeptical side point to the fact that MMLU scores are topping out, and maybe there's some impatience for more tangible real-world applications.
So as you work across all of the vectors at NVIDIA, could I just ask, across multimodal and large language reasoning models, what are your preferred measures of AI capability? How do you keep it empirical?
Jensen, CEO, NVIDIA: Really good question. As you know, the reason why reasoning is such a breakthrough is because a reasoning model can solve a problem it has never seen before. Makes sense, right? Because you're breaking it down step by step.
And each one of those steps, you know how to do. And one of those steps might be: go read this document, learn it, come back, and do the next step. Okay? And so the reason agents are so much more effective than a pretrained reasoning model is because the agent can benefit from context. Go read this document, and the document tells you exactly how to do it.
Come back and do it. MMLU doesn't do that. An open language model sitting out in free space doesn't have the benefit of your fine-tuning, your training. That's the reason enterprise models are gonna be so good. And we're experiencing this all over the place.
The work we do with ServiceNow and SAP and Cadence: they're all super agents. But they're narrowly super agents. And we give them context and retrieval-augmented generation. We fine-tune them. We teach them by human demonstration.
Does that make sense? Our goal is to design a chip; I don't need you to be a history expert. My goal is to do supply chain management; if you don't know anything about taxes, I'm gonna survive.
You know? Do you see what I'm saying? And so we take these reasoning models, these agents, and we fine-tune them for the job we need them to do. That's the reason why. Don't worry, these AI models are gonna get better and better and better.
No doubt. Just look at the curve. It’s gonna get better. But who cares? My job is not to wait for artificial superintelligence.
I just want them to do a good job with my supply chain management.
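A toy illustration of the point about context: the same stub "model" has nothing to say in free space but follows the document once a retrieval step supplies it, which is the essence of the retrieval-augmented, narrowly fine-tuned enterprise agents described above. The documents and matching logic are invented for the example.

```python
DOCS = {
    "expense-policy": "Claims above 500 EUR need director approval.",
    "chip-floorplan": "Place the SRAM macros along the north edge.",
}

def retrieve(query: str) -> str:
    """Toy retrieval step: keyword-match a document by name."""
    for name, text in DOCS.items():
        if any(word in name for word in query.lower().split()):
            return text
    return ""

def answer(query: str, context: str = "") -> str:
    """Stub model: without context it has no grounding; with context,
    it can follow the retrieved instructions exactly."""
    return context if context else "No grounding available for this question."

query = "what does the expense policy say about big claims?"
print("bare model:  ", answer(query))
print("with context:", answer(query, retrieve(query)))
```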
Antoine, Analyst, New Street Research: Thank you. Antoine, New Street Research. So thank you very much for sharing the growth forecast earlier for Europe in terms of capacity. It seems that Europe alone could be a very strong driver of growth in 2026. And that actually got me wondering, more generally: as we get closer to the middle of 2025, how should we be thinking about growth in data center for NVIDIA next year?
Because now we have some hyperscalers who have already guided 2026 CapEx. Broadcom last week, I think, said that they expect sustained revenue growth into 2026. And so that means they're getting some visibility. Right? So I assume you should also be getting some.
And any comments you can make would be very helpful. Thank you.
Jensen, CEO, NVIDIA: Everything that I told you guys today is in addition to the CSPs, the American CSPs. And most of Europe is underserved today. And even in the parts that are served, the newest generation doesn't come out there. You know, there are so many developers and researchers that are still using Amperes. They barely even have Hoppers.
And so that's the opportunity for the local CSPs. You know, they could deploy the best as soon as possible, and not wait for it to diffuse out from the public clouds. So all of this is incremental.
Luciano, Analyst, Impacts: Hi. Luciano from Impacts. So I think it's quite clear that you guys are dominating training, pre-training. You made a case today for why inference growth is really good for you, with reasoning and so on. And on post-training for the big models, I just wanted to know what you think the future of your business model is there, in terms of not just providing the compute, but perhaps the simulation for these models?
A little bit like you do in robotics, if you see something similar outside robotics as well.
Jensen, CEO, NVIDIA: Yep. Post-training is an excellent opportunity for us, because post-training is just a new phase of pretraining. And the new phase of pretraining, post-training, does this. The first thing you do is you give it human demonstration. We call it reinforcement learning from human feedback.
So I give you a demonstration, and you try it, and I tell you whether you did a good job or not. That's like coaching. And then the next thing is self-practice. Reinforcement learning, verifiable results.
So I give you a bunch of tests and I say: these are the answers. I give you the test, the problem, and the answers. And your job is to reproduce the results. And you just keep trying until you get it. You know what the right answer is.
If you get closer, I'll give you positive feedback. If you get further away, I'll give you negative feedback. And so that could be used for coding. The results are very verifiable. It could be used for science simulation.
There's a whole bunch of tools we've already created as humans that are excellent at providing the feedback, the ground truth. That's called reinforcement learning with verifiable results. All of that requires a ton of training. You just crank forever. The amount of training you can do is as much as the time you have.
How much practice can you possibly do? You can practice as much as you like. And so that's post-training. Very big deal. Yeah.
It uses a lot of compute. Basically, as much time as you have compute, you just gotta decide when to pull the plug. You know? I gotta go to market. I can't wait anymore.
Tomorrow's the test. You're out of time. And then for inference, as you know, right now NVIDIA is the world's largest inference platform. Right? Everybody says inference is easy, but there's nothing easy about inference.
It’s the hardest thing of all. And we’re very successful in inference.
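A minimal sketch of "reinforcement learning, verifiable results" in the coding setting mentioned above: candidate programs are scored against known test cases, and the pass rate is the reward. The random search is a stand-in for a real policy update; everything here is illustrative.

```python
import random

TESTS = [((2, 3), 5), ((10, -4), 6), ((0, 0), 0)]   # spec for add(a, b)

CANDIDATES = [          # the search space our toy "policy" samples from
    lambda a, b: a - b,
    lambda a, b: a * b,
    lambda a, b: a + b,
]

def verifiable_reward(program) -> float:
    """Reward = fraction of tests reproduced exactly (fully checkable)."""
    return sum(program(*args) == want for args, want in TESTS) / len(TESTS)

best, best_reward = None, -1.0
for step in range(100):          # "crank forever", or until you're out of time
    candidate = random.choice(CANDIDATES)
    r = verifiable_reward(candidate)
    if r > best_reward:
        best, best_reward = candidate, r

print("best reward found:", best_reward)   # converges to 1.0 on this spec
```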
Orlando Grandi, Analyst, Itaberra: Hi. This is Orlando Grandi from Itaberra. My question is about edge computing. So before, the case was that the computing, once in the pretrained world, would move to the edge, on device, and that's it. Right?
And you mentioned right now that post-training, reinforcement learning, is bringing the processing back to the data center. But what about the use cases in robotics, in satellites? You had some announcements there. Elon is speaking about sending robots to Mars. Then the latency becomes a big issue.
Right? You need to have that on-device capability. So how does computing on device, or on the edge, work in this new reinforcement learning world? Thank you.
Jensen, CEO, NVIDIA: The compute is on device. We have four major edge use cases. One is self-driving cars. Our autonomous vehicle business is already $5 billion a year. It's a big business.
Training, simulation, and, in the car, edge AI. The second one is robotics. And that's just starting to grow, and likely to be quite large. The third is facilities.
This is an edge computer that sits in a factory, you know, in a warehouse. Those are edge devices. And there we partner with Siemens a lot. And then another one that you've heard me talk about is base stations. Next-generation base stations, the 6G base stations, are gonna be based on AI.
And so we have a system that's called Aerial. So these are our four primary focus areas, because the software is very, very hard. The easy stuff, you know, we're not gonna go touch. But these four areas are quite hard. The computer's right at the edge.
We call them Orin and Thor. Two really amazing processors.
Edouard Kaminiak, Analyst: Edouard Kaminiak. Piggybacking on this question, which I also had in mind. I mean, the key device is this one, right? If we really start putting more and more AI on the iPhones, or the next iPhones to arrive.
How would that affect your business model, with your concentration on GPUs?
Jensen, CEO, NVIDIA: The more AI they use on the device, the more AI you're gonna use in the data center, because you still have to train the model. You have to develop the model and verify the model, evaluate the model. And all of that's done in the data center. Our business is not on the phone. The phone is not our business.
And, you know, there's a lot of innovation there. We don't build modems. We don't build, you know, low-power SoCs. That's not our core business. And it's well served anyway.
Edouard Kaminiak, Analyst: And then on the supply side, your key risk is...
Jensen, CEO, NVIDIA: The more AI, the better. Bottom line. Yeah. The absence of AI is the only thing I worry about.
Edouard Kaminiak, Analyst: Okay. Well... yeah. Not much...
Jensen, CEO, NVIDIA: Please, AI.
Edouard Kaminiak, Analyst: Not much to be concerned about there, I guess. The key supply risk, which you haven't mentioned, naturally, is TSMC in Taiwan. Right? I mean, I imagine you have devised emergency plans for that. Can you speak openly about them?
Jensen, CEO, NVIDIA: We announced that we are gonna build half a trillion dollars worth of AI supercomputers in the United States in the next several years. 100%. From chips to packaging to integration into supercomputers. We have partners that are setting up in the United States: TSMC, SPIL, Amkor, Quanta, Wistron, Foxconn. They're all setting up in the United States.
We're the largest customer of TSMC. They're very supportive of us. And so that's the goal. We'll manufacture on multiple continents. We'll continue to do so in Taiwan.
We manufacture some of our components with Samsung in Korea. And then we'll manufacture a lot more in the United States. But, you know, our supply chain is so large, we're manufacturing almost everywhere.
Edouard Kaminiak, Analyst: How would you rate the calendar for meaningfully reducing your dependency on Taiwan?
Jensen, CEO, NVIDIA: This is it. We're probably going as fast as anybody in the world. You know, I think the real truth is that Taiwan's pretty important to the world's supply chain. Let's avoid conflict. Job number one.
Edouard Kaminiak, Analyst: Of course. And then the last one: how would you rate Huawei's foray into AI chip manufacturing?
Jensen, CEO, NVIDIA: Very good. Very good. They're several years behind us, but for China, it's fine. The reason for that is this: their power is so cheap. Not because China is willing to accept less.
Their power is cheap. And when the power is cheap, you just use more chips. This is not like an iPhone. You know? Not like a phone.
In our case, for AI chips, our performance efficiency is probably four times theirs, five times theirs. But just use five times more chips. In the United States, that would never fly, because the data center is 100 megawatts. If they have to use four times as many chips, it's not gonna fit. They don't have enough power.
And so that data center would do a quarter of the revenue, and that would never fly. But there, just build more data centers, use more power. And so that's why our advice is that the export control be lifted so that we can go and compete for that business. But right now, as we speak, I just want everybody to know that we have taken China out of our forecast. We're assuming zero, because at the moment, we've been banned.
We went from, you know, a $30 to $40 billion a year business to zero. It's a big drop. And, you know, thank goodness our demand is so strong everywhere that we're gonna continue to grow anyhow. But, nonetheless, it's a big loss. Okay?
The important thing is we're not guessing about China. Are you guys following me? It's zero. And if, in some circumstance, the president negotiates some outcome that makes sense to them, it would be a bonus to us. But at the moment, we are assuming zero.
Okay? Please assume zero. No guessing. When you’re at zero, you don’t have to guess, you guys.
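A back-of-envelope version of the power argument above, with purely illustrative numbers: at a fixed power budget, a 4x performance-per-watt deficit means roughly a quarter of the throughput, and hence revenue, while with cheap, abundant power the gap can be closed by simply deploying 4x the megawatts.

```python
DATACENTER_MW = 100       # fixed US-style power budget (illustrative)
PERF_PER_MW_A = 4.0       # efficient chip, relative units
PERF_PER_MW_B = 1.0       # chip with one quarter the performance per watt

throughput_a = DATACENTER_MW * PERF_PER_MW_A
throughput_b = DATACENTER_MW * PERF_PER_MW_B
print(f"at fixed power, B earns {throughput_b / throughput_a:.0%} of A's revenue")

# With cheap, abundant power, B matches A by burning more megawatts instead:
print(f"power B needs to match A: {throughput_a / PERF_PER_MW_B:.0f} MW")
```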
Arthav Nadari, Analyst, PCGM: Hi, Arthav from PCGM.
Jensen, CEO, NVIDIA: You have to yell at me.
Arthav Nadari, Analyst, PCGM: Sorry.
Jensen, CEO, NVIDIA: I will not be offended.
Arthav Nadari, Analyst, PCGM: Apologies. Arthav Nadari from PCGM. I had a question on the reinforcement learning that you talked about. You mentioned cases that are obvious, like math or coding, where you can compare the result. But what about basically all the other cases, where you don't have a good sense of what the outcome needs to be?
So where the solution is a bit unknown, a bit hybrid. What do you see there in terms of reinforcement learning applied to those types of models?
Jensen, CEO, NVIDIA: Reinforcement learning is really, really good at learning how to do something where the reward is very, very far away, a long time away from the action. I have to take one step, another step, another step, another step. Eventually, I get a positive or negative response. Reinforcement learning is good at that, in fact. And it's the reason why reinforcement learning is good at robotics.
You say: robot, I want you to walk from here to there. And there you only have two goals. You have to get your head up as high as you can, and you have to move in that direction as much as you can without falling. And to produce that pose, many joint motions have to happen; all these different joints have many different motions that have to happen in order to get the head up.
Reinforcement learning is very good at these long feedbacks. Yeah. That's right. Reinforcement learning is good at that.
Yeah. The reward function is very far out.
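A toy example of that long-horizon credit assignment: a one-dimensional "robot" is rewarded only if it reaches a goal many steps away, yet a simple tabular method still learns to walk forward from that distant signal. This is generic reinforcement learning for illustration, not NVIDIA's robotics stack.

```python
import random

GOAL, N_STATES = 9, 10                      # walk from state 0 to state 9
q = [[0.0, 0.0] for _ in range(N_STATES)]   # action 0 = back, 1 = forward

for episode in range(5000):
    state, trajectory = 0, []
    for _ in range(60):                     # many actions before any feedback
        action = random.randrange(2)        # explore; the reward is too far
        trajectory.append((state, action))  # away to guide the early steps
        state = min(max(state + (1 if action else -1), 0), N_STATES - 1)
        if state == GOAL:
            break
    reward = float(state == GOAL)           # sparse reward, far from the action
    for s, a in trajectory:                 # propagate it back along the path
        q[s][a] += 0.05 * (reward - q[s][a])

policy = ["forward" if q[s][1] > q[s][0] else "back" for s in range(GOAL)]
print("greedy policy after training:", policy)   # typically all 'forward'
```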
CJ Muse, Analyst, Cantor Fitzgerald: Thank you again. CJ Muse with Cantor. Thanks for the follow-up. I was hoping you could speak to the GB300 transition. I think on the last earnings call, you talked about initial output and low volumes in the July quarter and a ramp thereafter.
I wonder if there's any more specificity there. And just to follow on your comment earlier about the twelve-month lead time from wafer to full output: I guess, how does that inform your customers lining up GB200 versus GB300? And perhaps, you know, is your visibility even longer than twelve months? Thank you.
Jensen, CEO, NVIDIA: Yeah, CJ, I appreciate it. We forecasted the GB300 transition last year, and it's showing up at the time we forecasted. As you know, GB200 was late because we had a bug in Blackwell. But B300 did not have the same bug.
And so B300 shows up on time. The window from B200 to B300 is shorter because of that. But we were planning for this transition at this time a year ago now. And so the transition is going just great, because it's basically the same chassis. Going from HGX to this NVLink chassis was a huge difference.
Everything was different. The mechanical process, the mechanical systems, all the electronics were different. Testers were different. Even the companies that tested it were different. And the way we tested it was different.
We used to integrate these computers at the data center. We'd send the computer nodes, each one of the HGX and the CPU trays, to the data center. They'd integrate it at the data center, and they'd test it at the data center. Today, that entire thing is tested at an ODM.
Fully tested. Fully integrated. And we ship a supercomputer out the door. It's incredible. And so even the amount of power necessary on the manufacturing floor went from a few megawatts to tens of megawatts.
Because they're basically building and testing AI supercomputers. And so everything changed. But with GB300, everything is exactly the same. We decided months ago that we would not even change the packaging that sits on the motherboard, just so that everything remains the same.
And that was a good decision. Yeah. We're in great shape on GB300.
Joe Moore, Analyst, Morgan Stanley: It's Joe Moore, Morgan Stanley. Could you talk to NVLink Fusion and the potential opportunity there? And I guess one of the more frequently asked questions I get is: are you making ASICs better with that product? And therefore, could it have any impact on the processor business?
Jensen, CEO, NVIDIA: Yeah. First of all, a lot of ASICs get started. Most of them are canceled. And the reason for that is: what's the point of building an ASIC if it's not gonna be better than the one you can buy, in some very specific measure? And we're moving so fast, and the bar we're raising is so incredible.
It's not easy. It really isn't easy to build. You know? If it was easy to build a Blackwell, you'd just say: hey, you know,
I got 14 guys here. Let's go build a Blackwell. You know? If it was that easy, well, gosh, I don't know why I'm working so hard.
You know? I've been doing this for thirty-three years, and it seems harder than ever. And then somebody goes: yeah, I'll do an ASIC. And so I'm delighted to hear everybody's interested in building ASICs.
Alright? I do believe most of them are gonna get canceled. However, many of them have approached us about using NVLink. And they're important people to me. You know, the person who's asking me is not a stranger.
You guys know that. The person asking me is somebody who said: hey, Jensen, listen. We've got a whole bunch of your NVLink systems. We have a whole bunch of your chassis.
We standardized on everything here. If we had NVLink, we could put our CPU in it, we could just use the same chassis and extend it, and we'll buy everything else from you. We'll buy everything else from you. And you know that last part? We'll buy from you.
You got me. You buy from me. Are you kidding me? Of course. The person who's asking me needs help.
NVLink is a good thing. NVLink and Spectrum-X connect into an ecosystem that I care about. You know? I care about DOCA as much as I care about CUDA. I care about NIXL as much as I care about NCCL.
These are all APIs of NVIDIA that are really important. They don't all run on GPUs. And so I care about all of my ecosystems. And like I said, it excites me when you say: I'll buy all of it. It's a super clever strategy. It's just not easy to do.
So we had to go start a team to build this thing called NVLink chiplets. And then we signed up a whole bunch of partners to help us integrate NVLink for the customers and the partners. And then we even took our IP and made it available to Synopsys and Cadence so they could distribute it on our behalf. And so we're gonna turn this whole thing into a nice ecosystem. I think it's gonna work out great.
NVLink is, as you know, revolutionary. It's really hard.
Eric Balossier, Analyst: Hi, Eric Balossier. Thank you very much.
Jensen, CEO, NVIDIA: I think somebody said it's just Ethernet.
Eric Balossier, Analyst: This morning, you presented a new product, the NVIDIA RTX Pro server. Could you give us some sense of the size of the opportunity, the use cases, the customers you target with this new product?
Jensen, CEO, NVIDIA: Oh, yeah. Yeah. I'm sorry. I got it. The world's enterprise today has no AI.
Just go to every single large company. Just look at them. Pick your favorite large company. How much AI do they use in their data center? Almost zero.
All these data centers all over Europe: zero. And how do you bring AI to that data center? You can't, because it's not liquid-cooled. They need to run Red Hat Linux.
They have to run VMware. They have to run Nutanix. They wanna run NetApp. Does that make sense? It's a bunch of strange things that cloud service providers don't have to worry about.
Well, I'm not sure which one's strange, but bringing AI to traditional enterprise IT is very hard. So the architecture has to be obedient to the past but innovative for the future. Like, for example, RTX Pro runs Windows. That's pretty crazy. You know?
And so it runs Windows. It runs hypervisors. It runs all the things that IT managers know. They go: oh, that makes me happy. Yeah.
So how big is the opportunity? Hundreds of billions. The world's enterprise IT is now just getting AI. That's why Cisco, Dell, HP, everybody is so excited about it. All of our partners, the entire storage industry, standardized behind it.
Lou Miscioscia, Analyst, Daiwa Capital Markets America: Back over here. Lou Miscioscia again, Daiwa Capital Markets. Maybe on that last answer, could you just point to the two or three things that you announced today that are the most impactful for the near term, as you're trying to drive AI into the future?
Jensen, CEO, NVIDIA: Ignoring GB200 and GB300, since you guys have already heard about that. We're gonna grow hundreds of billions of dollars of GB200 and GB300; let's just take that off the table. Okay? Assuming we already talked about that: RTX Pro, no doubt.
It is the universal, you know, AI system that you can integrate into a traditional enterprise IT organization. My IT organization doesn't know how to use GB200. I'd have to go build a separate cloud for them. Their data centers say Red Hat, you know, hypervisors, Nutanix, NetApp; they use phrases like that. These are not AI cloud people.
And you're not gonna change them, because they've got too much software running. They've got a company to run. So we have to augment AI into them, and this is the way to do it.
Lou Miscioscia, Analyst, Daiwa Capital Markets America: And what's the availability of it?
Jensen, CEO, NVIDIA: It's in production now. Available now. Yeah. Please tell everybody to buy it.
Timm Schulze-Melander, Analyst, Redburn: Okay. Thank you.
Francois, Analyst, UBS: Thank you. It's Francois from UBS. So I have a quick question on sovereign AI, a hot topic while you've been here in Europe. Is there any control you can apply to these announcements? In the sense that, if you're thinking about all this demand potential, if I'm a country, I want to build as much capacity as quickly as possible, because I don't know where the demand is going to go, but it's going to be big.
So I need to build capacity, big and fast. How do you control that? Because obviously, if you do 20 gigawatts of factories at $40 to $50 billion per gigawatt, that's a lot of money. So are there any milestones where, when you work on these projects, you say: well, maybe take a five-year view, or a two-year view, and then let's see where you are on a one-year view, just to rationalize? Instead of having one big year when you install all this capacity, you can do it in a smoother manner. So I was just wondering how you deal with all this sovereign AI.
And yeah.
Jensen, CEO, NVIDIA: It gets built incrementally, like you say, anyway. You know, over the last couple of years, these companies have been building up their offtake. They've been building up their demand. And so it gets built up like that anyway, step by step.
Nobody puts five gigawatts down and then waits for demand. That's not gonna happen. That's right. But you have to start. Because if you believe in AI in the future, you have to step back and say: okay,
I need the land. I need the shell. I need the power. It either comes off the grid, or it's gonna be generated.
You know, there's a whole bunch of questions to line up long before building the AI supercomputer. And so the important idea is that we're now talking about infrastructure. And these are infrastructure timelines. And we've been talking to Europe now for some time. This just happens to be the visit where we talk about it with all of you.
But, you know, the infrastructure’s being discussed for well over a year now. Yeah.
Brett Simpson, Analyst, Arete Research: Thanks. This is Brett Simpson again. I just wanted to follow up. Jensen, what do you think the useful life of NVL72 is gonna look like? I mean, if I look at a lot of your customers, they're depreciating over different periods.
I don't know if there's a comparison between Hopper and Blackwell, but do you think you can improve the useful life of the racks more? I mean, you've got 1.2 million components, I think, in these racks. But how long do they last?
Jensen, CEO, NVIDIA: Yeah. Two answers. One of them is the useful life, and the other one is the accounting life. I mean, most people might account for it over four to five years, you know, depreciate it over four to five years. But the useful life is gonna be five, six, seven years. And the reason for that, you just have to go back, and you might hear us talk about it.
Like, for example, in these last two years, we improved the performance of Hopper by four times. Four times. In the last two years, software running on x86 improved zero times. We improved our software performance by four times. The reason for that is that accelerated computing is fundamentally different from CPUs.
There's a JIT, just-in-time compilation, that sits inside CUDA. CUDA is a virtual machine. I can change the software and improve the performance with new algorithms long after you bought the silicon. And we are dedicated to that forever. That's the reason why NVIDIA is doing so well.
Because we go back and help you improve your performance for as long as we shall live. I've got mountains of people doing that. You would never do that for architectures where the installed base is so small. You do it for CUDA because, if you do it for Hopper, you benefit how many people? And so researchers, software developers, you know, people who work on software love doing this, because it helps billions of dollars of infrastructure.
And so Hopper keeps getting better. Ampere, I'm still optimizing on Ampere. Ampere is now, what, five, six years old? So, two different questions.
Their accounting life, that's up to them. But I think they're gonna find usefulness for years after, just as the cloud service providers are. They're very happy with the old stuff.
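As a small illustration of "the software keeps making the same silicon faster": the same machine runs two generations of a kernel for the same task, and the algorithmic rewrite alone produces a large speedup. That is the shape of the gains described above, delivered through CUDA releases years after the hardware shipped; the plain-Python "kernels" below are stand-ins, not CUDA code.

```python
import time

def kernel_v1(data):
    """Launch-year algorithm: O(n^2) scan over all pairs."""
    return sum(a * b for a in data for b in data)

def kernel_v2(data):
    """Later software release: same answer in O(n), because
    sum_i sum_j a_i * a_j == (sum_i a_i) ** 2."""
    s = sum(data)
    return s * s

data = list(range(2000))
for name, kernel in [("v1", kernel_v1), ("v2", kernel_v2)]:
    t0 = time.perf_counter()
    result = kernel(data)
    print(name, result, f"{time.perf_counter() - t0:.4f}s")  # same result, far faster
```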
Luciano, Analyst, Impacts: Thanks. Maybe one for Colette. So AI, when you think about AI, is maybe the biggest TAM ever for any company. So you would expect companies to not care that much about margins, to just try to capture the market. However, you guys have not only amazing growth, but also super high margins.
So I'm just trying to understand how you think about that equilibrium between growth and margins when thinking about EPS growth. Thanks.
Colette Kress, CFO, NVIDIA: Yeah. The work that the teams have done building out the systems is not something you can describe as just cost-plus. It's been an enormous feat of software and complete engineering, from the hardware standpoint, to put together what we put together. So we've always looked at it, every single time, from a TCO value: what can we provide to these customers, and what is the next best option they could choose?
And that's how we determine an important part, which is the price. The cost structure, and even the price from the very onset, is something where we will always keep about the same price and continue to fine-tune from the cost perspective. So then it comes down to: where do we make the investments in terms of our work? And more and more, we think about the strong, new strategic investments we can make to continue to see our platform grow worldwide. And that doesn't mean creating many different options in this chip.
It means looking at the total piece as a whole. Most of the work, and what you saw today, was about CUDA, the software, and everything that we need to do. There's a long way to go in getting to the enterprises. The enterprises still need a lot of change management on the existing software they're using. And so our expansion, and where we want to continue, is to take that platform and enable every type of software system that's out there from the combination of what we put together.
Yes, the margins are strong margins. We continue to be a company quite thoughtful in terms of our investments. This isn't a time when we're hiring tons and tons of people, because that doesn't necessarily always help you. But we will continue to make the best investments, whether from an operating perspective or using our cash. Those two things together will, I think, continue to enhance the true value that we can provide to investors across our full P&L.
Jensen, CEO, NVIDIA: This is what happens when you're being interrogated. This will be the last question. No pressure.
Timm Schulze-Melander, Analyst, Redburn: Thanks, Toshiya. It's Tim again, at Redburn Atlantic. Maybe just on NIMs and NeMo. You've talked about how you developed CUDA, and how that's just an incredible part of the moat. When you think about NIMs and NeMo, could you maybe talk about their significance in the hyperscaler world today?
Are NIMs and NeMo a more important part of your moat when you get into the enterprise? Maybe just give us some sense of how big a deal they are within that overall.
Jensen, CEO, NVIDIA: Yeah. Great question. If you were OpenAI, you'd know how to build NIMs. If you were Google, you'd know how to build them yourself. The entire packaging of that runtime: super hard.
The amount of software that's inside... we call it a NIM. Thank you. We call it NIM. But the amount of software inside of it: there's CUDA, cuDNN, CUTLASS, TensorRT-LLM, Triton. It's basically a ChatGPT in a box.
You download it. You're talking to it. It's an AI. You download it. You say: here's a video.
Tell me about this video. Reason about it. Why did it do what it did? What's it gonna do next? It's weird.
It's basically an AI in a container. Most of the cloud service providers know exactly how to do this. Everybody else has no clue. And they shouldn't have to. We should turn it into something like a NIM.
It's the modern way of packaging AI. Do you guys understand what I'm saying? A long time ago, you know, 1993, is it? 1991? The retail box of Windows.
You know? They figured out how to package software. That was the start of the software industry. When we thought about NIMs, I was going: it's like they figured out packaging. We gotta figure out packaging for AI, so that everybody can easily absorb it and enjoy it.
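A hedged sketch of what "an AI in a container" means in practice: a NIM-style microservice runs locally and serves an OpenAI-compatible HTTP endpoint you can simply talk to. The URL, port, and model identifier below are assumptions for a typical local deployment, not guaranteed defaults, and the request only succeeds with a container already running.

```python
import json
import urllib.request

payload = {
    "model": "nvidia/llama-3.1-nemotron-70b-instruct",   # assumed model id
    "messages": [{
        "role": "user",
        "content": "Here is a video description: ... Why did the robot "
                   "do what it did, and what will it do next?",
    }],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",         # assumed local endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:                # needs a running NIM
    print(json.load(resp)["choices"][0]["message"]["content"])
```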
And what Colette said earlier makes she made a super important point, which is, remember, one of the reasons why we’re able to deliver the value and prove it so is because the entire system of the GPU, the NVLink, the switches, the spine, the software, everything got integrated and delivered a performance level that’s 40 times higher. Are you guys following me? And dynamo, you can’t there’s no 40 times in an ASIC. You’re not gonna go hopper to black wall. Hey.
Look at that. 40 times. Moore’s law doesn’t let you do that. Isn’t that right? You don’t have 40 times more transistors.
How could you get 40 times more flops? And so the question is, how did we get 40 times more performance? Because we architected everything in whole, and we can deliver the software to do so. Otherwise, you’re limited by gross margin plus on TSMC wafers. Does that make sense?
There you simply can’t do what we do without understanding the big picture, architecting everything at one time, distributing the work across, you know, pulling out these amazing things that delivers the throughput. That customer goes, you know what? I get it. I believe it. You’ve been doing it every time.
I buy it. And then they’ll they’ll they’ll they’ll appreciate the value. And we can talk about value instead of cost. Okay. It’s great to see all of you in Europe.
Thank you.