NVIDIA at GTC Financial Analyst Q&A: AI Infrastructure Expansion

Published 19/03/2025, 19:04

On Wednesday, 19 March 2025, NVIDIA Corporation (NASDAQ: NVDA) held its GTC Financial Analyst Q&A, presenting a strategic shift towards AI infrastructure for cloud, enterprise IT, and robotics. Led by CEO Jensen Huang, the discussion highlighted NVIDIA’s ambitions to modernize enterprise IT and dominate the AI landscape. While the company aims for growth in AI infrastructure, it also faces challenges such as economic downturns and tariffs.

Key Takeaways

  • NVIDIA is focusing on AI infrastructure, partnering with Dell, HPE, and Cisco to transform enterprise IT.
  • The company’s robotics business, valued at $5 billion, is expanding rapidly with key partnerships, including GM.
  • NVIDIA anticipates improved gross margins with the ramp-up of the Grace Blackwell architecture.
  • The company is preparing for potential tariffs by planning onshore manufacturing in Arizona.
  • NVIDIA aims to maintain dominance over custom ASICs with its comprehensive AI platform.

Financial Results

  • Gross margins are expected to improve as the Grace Blackwell architecture scales up over the next three to four years.
  • NVIDIA foresees large AI factory projects, with investments potentially reaching hundreds of billions of dollars.
  • Data center spending is projected to reach a trillion dollars.

Operational Updates

  • NVIDIA is expanding its focus to include AI infrastructure for cloud, enterprise IT, and robotics, addressing computing, networking, and storage.
  • The emphasis on "AI factories" marks a shift towards single-function data centers dedicated to AI.
  • The company’s robotics business, valued at $5 billion, includes self-driving cars and robotic warehouses.

Future Outlook

  • NVIDIA is preparing for onshore manufacturing, leveraging TSMC’s investment in Arizona to mitigate potential tariff impacts.
  • The company laid out a roadmap of three years, emphasizing its role as an infrastructure provider rather than a consumer products business.
  • If a recession occurs, companies working on AI are expected to shift even more investment toward it as the fastest-growing area.

Q&A Highlights

  • NVIDIA’s GPUs have the longest lifecycle of any accelerator, lasting three to four years longer than competitors’, and remain useful for data processing.
  • The company plans to continue using copper for NVLink as long as possible, shifting to silicon photonics only when necessary.
  • Homogeneous clusters are favored for higher data center performance, as every computer is fungible.

For a detailed understanding, please refer to the full transcript below.

Full transcript - GTC Financial Analyst Q&A:

Jensen: Sorry. I’m late. I was on TV. I’m just kidding. Chuck and I were on TV.

Chuck Robbins, Cisco.

Unidentified speaker: All good. And also on Cramer this morning. So just a few interviews, right?

Jensen: Yes. That was fun.

Unidentified speaker: Okay. Great to see everybody, both yesterday as well as last night at our cocktail hour. This is an opportunity to speak with Jensen and really talk about what our announcements at GTC mean to our investor community. I kindly remind you to look at our disclosure statement, the fine print in front of us, and then I want to make sure there’s one announcement for you all: Toshiya Hari is here with us as our new lead of investor relations.

He started just about yesterday and has now been here a good forty-eight hours, so please make sure you ask him a ton of questions. We’re really pleased to bring him on board here to California after his many, many years covering semis at Goldman. Truly, truly excited to have him as part of the team. With that, I’m gonna turn the mic over to Jensen for some opening remarks.

Jensen: Good morning. Great to see all of you. Let’s see. We announced a whole lot of stuff yesterday, and, let me put it all in perspective. The first is, as you know, everybody’s expecting us to build AI infrastructure for cloud.

That, I think, everybody knows. And the good news is that the market’s understanding of R1 was completely wrong. That’s the good news. And the reason for that is because reasoning is a fantastic new breakthrough. Reasoning produces better answers, which makes AI more useful; it solves more problems, which expands the reach of AI; and, of course, from a technology perspective, it requires a lot more computation.

And so the computation demand for reasoning AIs is much, much higher than the computation demand of one-shot, pre-trained AI. And so I think everybody now has a better understanding of that. That’s number one. Number two, the first thing is inference: Blackwell is incredibly good at it, we’re building out AI clouds, the investments of all the AI clouds continue to be very, very high, and the demand for computing continues to be extremely high.

Okay. So I think that’s the first part. The part that I think people are starting to learn about, and that we announced yesterday, I could have done a better job explaining, and so I’m gonna do it again. In order to bring AI to the world’s enterprises, we have to first recognize that AI has reinvented the entire computing stack. And if so, all of the data centers and all the computers in the world’s enterprises are obviously out of date.

And so just as we’ve been modernizing all the world’s clouds for AI, it’s sensible we’re gonna have to re-rack, if you will, reinstall, modernize, whatever word you like, the world’s enterprise IT. And doing so is not just about a computer; you have to reinvent computing, networking, and storage. I didn’t give it very much time yesterday because we had so much content, but that part, enterprise IT, represents about half of the world’s CapEx. That half needs to be reinvented, and our journey begins now. Our partnership with Dell and HPE, and, this morning, the reason why Chuck Robbins and I were on CNBC together, is to talk about this reinvention of enterprise IT.

And Cisco is gonna be an NVIDIA networking partner. I announced yesterday that basically the entire world’s storage companies have signed on to be NVIDIA storage technology and storage platform partners. And, of course, as you know, computing is an area that we’ve been working on for a long time, including building some new modular systems that are much more enterprise friendly. And so we announced that yesterday: DGX Spark, DGX Station, and all of the different Blackwell systems that are coming from the OEMs.

Okay. So that’s second. So now we’re building AI infrastructure not just for cloud, but we’re building AI infrastructure for the world’s enterprise IT. And the third is robotics. When we talk about robotics, people think robots, and this is a great thing.

It’s fantastic. There’s nothing wrong with that. The world is tens of millions of workers short. We need lots and lots of robots. However, don’t forget the business opportunity is well upstream of the robot.

Before you have a robot, you have to create the AI for the robot. Before you have a chatbot, you have to create the AI for the chatbot. That chatbot is just the very last end of it. And so in order for us to enable the world’s robotics industry, upstream is a bunch of AI infrastructure we have to go create to teach the robot how to be a robot. Now, teaching a robot how to be a robot is in fact much harder than teaching a chatbot, for obvious reasons.

It has to manipulate physical things and it has to understand the world physically. And so we have to invent new technologies for that. The amount of data you have to train with is gigantic. It’s not just words and numbers; it’s video and physical interactions, cause-and-effect physics.

And so that new adventure we’ve been on for several years, and now it’s starting to grow quite fast. Our robotics business includes self-driving cars, humanoid robots, robotic factories, robotic warehouses, lots and lots of robotic things. That business is already many billions of dollars. It’s at least $5,000,000,000 today, including automotive, and it’s growing quite fast. Okay?

And yesterday, we also announced a big partnership with GM, who’s gonna be working with us across all of these different areas. And so we now have three AI infrastructure focuses, if you will: cloud data centers, enterprise IT, and robotic systems. I talked about those three buckets yesterday. And then foundationally, of course, we spoke about the different parts of the technology: pre-training and how it works for reasoning AIs, how it works for robotic AI, how reasoning in inference impacts computing and therefore, directly, how it impacts our business. And then answering a very big question that a lot of people seem to have, and I never understood why, which is, how important is inference to NVIDIA?

You know, every time you interact with a chatbot, every time you interact with AI on your phone, you’re talking to NVIDIA GPUs in the cloud. We’re doing inference. The vast majority of the world’s inference is on NVIDIA today. And it is an extremely hard problem. Inferencing is a very difficult computing problem, especially at the scale that we do it.

And so I spoke about inferencing at the technology level. But at the industrial level, at the business level, I spoke about AI infrastructure for cloud, AI infrastructure for enterprise IT, and AI infrastructure for robotics. Okay? So I’ll just leave it at that. Thank you.

Ben Reitzes, Melius Research: Hi. It’s Ben Reitzes with Melius Research. Thank you for having us. This is obviously a lot of fun. I said that at the last meeting.

And, Jensen, I was thinking about this question quite a bit. I want to ask you a big question, a big-picture question about TAM. You talked about data center spend as your TAM, and your share was about 25% to 30% last year. You used the Dell’Oro forecast; they go to a trillion dollars, but that’s about a 20% CAGR. So the street here has your data center growing about 60%, but then slowing to the rate of 20% thereafter.

But I’m just thinking, like, you’re in all these areas that are growing faster than the market. You’re at a 25-ish percent share of the overall data center spend. I’m thinking of Dell’Oro; I doubt they have robotics and AV data center infrastructure in there. So my question is, with that backdrop, why wouldn’t your share of data center spend go up over a three- to five-year period versus this 25%?

And, you know, why isn’t that Dell’Oro CAGR that you put up there actually too low? It doesn’t seem like autonomous and robots are in there. So why wouldn’t your share go up if you’re in all the right areas?

Jensen: Excellent question. Yesterday, I was explaining it this way. Remember, I said two things. One dynamic is that the world is moving from the general purpose computing platform to the GPU-accelerated computing platform. And that platform shift from one to the other means that, whatever the CapEx of the world is, it is very, very certain that our percentage of it is gonna be much higher going forward.

It used to be 100% general purpose computing, 0% accelerated computing. But it is well known now that out of a trillion dollars, the vast majority of it would be for accelerated computing. That’s number one. So whatever the forecast people have for data centers, I think NVIDIA’s proportion of that is gonna be quite large. And we’re not just building a chip; we’re building networking and switches and, you know, we’re basically building system components for the world’s enterprises, for the world’s data centers. So that’s number one.

The second thing that I said, that nobody’s got right, none of these forecasts has it, is this concept of AI factories. Are you guys following me? It’s not a multipurpose data center. It’s a single-function AI factory. And these GPU clouds and Stargates and so on and so forth.

Okay? These AI factories are not accounted for. Do you guys understand? Nope, because nobody knows how to go do that.

And these multiple-hundred-billion-dollar CapEx projects that are coming online are not part of somebody’s data center forecast. Are you guys following me? How could they possibly know about these things? We’re inventing them as we speak. And so I think there are two ideas here, and I didn’t wanna be too prescriptive in doing so because nobody knows exactly, but here are some fundamental things that I do know.

I fundamentally believe that accelerated computing is the way forward. And so if you’re building a data center and it’s not filled with accelerated computing systems, you’re building it wrong. And our partnership with Cisco and Dell and HPE and all this enterprise IT gear, that’s what that’s about. Okay? And so that’s number one.

Number two, AI factories. GM’s gonna have AI factories. Tesla already has AI factories. Just as they have car factories, they have AI factories to power those cars. Every company that has factories will have an AI factory with it.

Every company that has warehouses will have AI factories with it. Every company that has stores will have AI factories for those stores, to build the intelligence to operate the stores. And that store, of course, as you know, can be an e-tail store. There’s an AI that runs Amazon’s e-tail store. There’s an AI that runs Walmart’s digital store. In the future, there’s gonna be AI that runs the physical store as well.

A lot of the systems will be AI, and the physical systems inside, even the robotics, will be AI driven. And so I think those parts of the world are simply not quantified. Does that make sense? No analyst has figured that out. Not yet.

It will be common sense here pretty soon. There’s no question in my mind that out of $120,000,000,000,000 of global industry, a very large part of that, trillions of dollars of it, will be AI factories. There’s no question in my mind now. Just as today, manufacturing energy is an entire industry, and it’s invisible too.

Manufacturing intelligence will be an entire industry, and we are, of course, the factories of that. And so that layer has not been properly quantified, and it’s the largest layer of what we do. It is the largest layer of what we do by far. But we are still going to, in the process, revolutionize how data centers are built, and it will be 100% accelerated. I am absolutely certain that before the end of the decade, 100% of the world’s data centers will be accelerated.

CJ Muse, Cantor: Yeah. Good morning. CJ Muse with Cantor. Thank you for hosting this morning. One of the key messages yesterday was that inference scaling laws are actually accelerating, led by test-time scaling for enhanced reasoning.

So my question, how does the work NVIDIA is doing to push the inference Pareto frontier impact how you think about the relative sizing of the inference market, your competitive positioning? And then could you discuss Dynamo briefly? Is there a way to isolate the productivity gains from this optimization software that we should be thinking about? Thanks so much.

Jensen: Yes, really appreciate it. Going backwards: as you know, people say NVIDIA’s position is so strong because our software stack is so strong, and our training component is in it. It’s not just one thing, because training is a distributed problem, and across the computing stack our software runs on the GPU, it runs on the CPU, it runs in the NICs, it runs on the switches, it runs all over the place. And so you have to figure out which framework, which library, which systems to integrate it all into. And because no one company in the world develops the software holistically and totally for training, we have to break it up into a whole lot of parts and integrate it into a whole lot of systems, which is the reason also why our capability is so respected: it seems like everywhere you look, there’s another piece of NVIDIA software.

And that is true. Now inference is software scaling at a level that nobody has ever done before. Inference is enormously complicated, and the reason, as I was trying to explain yesterday to give it some texture, some color, is that this is the world’s extreme computing problem, because everything matters. Everything matters, and it’s a supercomputing problem where everything matters. And that’s quite rare.

Supercomputers and supercomputing applications are used by one person at a time; they’re not used by millions of people at the same time, where everybody cares and the answer matters. And so the amount of software we have to develop for that is quite large, and we put it under the umbrella called Dynamo. It’s got a lot of pieces of technology in it. The benefit of Dynamo, ultimately, is that without it, just as without all of NVIDIA’s training software, how do you even do it?

And so the first part is that it enables you to do it at scale. One of the really innovative AI companies is a company called Perplexity, and they find incredible value in Dynamo and the work that we do together. And the reason for that is because they’re serving AI at very large scale. And so, anyways, that’s Dynamo. The benefit of it, measurably, in terms of exactly how many x-factors, is hard to pin down, but it’s less than 10.

It’s probably less than 10, in terms of doing a good job versus not doing a good job. But if you don’t have it, you don’t do it at all. Okay? So it’s essential; it’s just hard to put an x-factor on it. With respect to inference, here’s the thing.

Reasoning generates a whole lot more tokens. The way reasoning works, the AI is just talking to itself. Do you guys understand what I’m talking about? It’s thinking; it’s talking to itself. On the one hand, the reason why we used to think that 20 tokens per second was good enough for chatbots is because a human can’t read much faster anyways.

So what’s the point of inferencing much faster than 20 tokens per second if the human on the other side of it can’t read it? But thinking, the AI can do incredibly fast. And so we’re now certain that we want the performance of inference to be extremely high so that the AI can think, some of it out loud, most of it to itself. And so we now know that’s likely the way that AI is going to work: internal thinking, out-loud thinking, and then final answer production, and then the interactive part of it, which is, you know, getting more color, getting more explanation, so on and so forth. But the thinking part is no longer one shot.

And the difference between thinking versus not thinking, which is just a knee-jerk reaction of an answer: the number of non-thinking tokens versus thinking tokens is at least a hundred times. You know, the point of putting a number on it, I just put an arrow on it, because I know that there’s no way you could put a single simple answer on it. And yet, as you know, people love simpler answers. And you guys know that I always have a hard time with that, because it always seems more complicated in my head. But I think a hundred x is easily so.

Most likely, nearly all the time, a hundred x. Now here’s the x-factor on top of that. This is the part that people don’t consider. You now have to generate a lot more tokens, but you have to generate them way faster. And the reason for that is nobody wants to wait until you’re done thinking. You know?

And it’s still an Internet service. And the quality of the service and the engagement with the service has something to do with the response time of the service, just like search. If it takes too long, and it’s measurably so, people just give up on it. They don’t wanna come back, and they won’t use it. And so now we have to take this problem where we’re using a larger model, because it’s reasoning and more capable.

It also produces a lot more tokens, and we have to do it a lot faster. So how much more computation do you need? Which is the reason why Blackwell came just in time. Grace Blackwell with NVLink 72, FP4, better quantization, the fast memory on Grace, every single part of the architecture, people now go, man, how did you guys realize all that? How did you guys reason through all of that?

And how did you guys get all that ready? But now Grace Blackwell, NVLink 72, came just in time. And I still think that the 40x that Grace Blackwell provides is a big boost over Hopper, but I still think, unfortunately, it’s many orders of magnitude short. But that’s a good thing. We should be chasing the technology, hopefully, we’ll be chasing the technology, for a decade, if not more.
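To put the scaling argument above in concrete terms, here is a back-of-the-envelope sketch. The roughly 100x thinking-token figure comes from the remarks; the one-shot token count, the 20 tokens-per-second chatbot baseline, and the acceptable wait time are illustrative assumptions, not NVIDIA figures.

```python
# Back-of-the-envelope sketch of the reasoning-inference scaling argument.
# The ~100x thinking-token multiplier comes from the remarks above; the
# other constants are illustrative assumptions only.

ONE_SHOT_TOKENS = 1_000          # assumed tokens in a knee-jerk, one-shot answer
REASONING_MULTIPLIER = 100       # "at least a hundred times" more thinking tokens
CHATBOT_RATE = 20                # tokens/sec, enough when a human is reading along
TARGET_RESPONSE_SECONDS = 50     # assumed acceptable wait for a reasoned answer

reasoning_tokens = ONE_SHOT_TOKENS * REASONING_MULTIPLIER
required_rate = reasoning_tokens / TARGET_RESPONSE_SECONDS
speedup_vs_chatbot = required_rate / CHATBOT_RATE

print(f"tokens per reasoned answer: {reasoning_tokens:,}")
print(f"required generation rate:   {required_rate:,.0f} tokens/sec")
print(f"speed-up vs. 20 tok/sec:    {speedup_vs_chatbot:,.0f}x")
# Under these assumptions, both the token count and the required generation
# rate rise by roughly two orders of magnitude versus a one-shot chatbot.
```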

Stacy Rasgon, Bernstein Research: Great. Thanks. It’s Stacy Rasgon at Bernstein Research. We wondered if Blackwell and Hopper are both gonna grow in the quarter. I’m just kidding.

Jensen: You know, listen. Listen. That joke was kinda like my Pym Particle joke.

Okay? Yeah. That’s the kind of stuff that’s funny only to your closest friends.

Stacy Rasgon, Bernstein Research: Probably. Hopefully, they’re all here. What I did wanna ask about

Jensen: Stacy, not one person on the Internet knows what we’re talking about. Everybody in this room is cracking up. That’s what I’m talking about.

Stacy Rasgon, Bernstein Research: I liked your Pym Particle joke. What I did wanna ask about was the chart you showed yesterday, though, the Hopper traction versus the Blackwell traction. And it showed, I think, 1,300,000 Hopper shipments for calendar ’24. It talked about 3,600,000 Blackwell GPUs year to date. I guess that’s 1,800,000 chips because it’s a two for one.

How do I interpret that chart? Because, like, 1,800,000 Blackwell chips would be, I don’t know, $50, $60, $70 billion worth. Tracking seems great, but that seems like a lot. So, like, can you maybe just describe what that chart was actually trying to tell us, how to interpret it, and what the read is for the rest of the year? Yeah.
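For reference, a quick sanity check of the arithmetic in the question. The two-dies-per-package figure is stated in the question itself; the per-package average selling prices below are purely illustrative assumptions, not disclosed NVIDIA pricing, used only to show how the $50 to $70 billion range could arise.

```python
# Sanity check of the chart arithmetic in the question, under assumed ASPs.

blackwell_gpus_ordered = 3_600_000        # Blackwell GPUs cited for the top four CSPs
dies_per_package = 2                      # "two for one"
packages = blackwell_gpus_ordered // dies_per_package   # 1,800,000 packages

for assumed_asp in (28_000, 33_000, 39_000):             # USD per package, assumed
    implied_revenue = packages * assumed_asp
    print(f"ASP ${assumed_asp:,}: ~${implied_revenue / 1e9:.0f}B implied")
# 1.8 million packages at roughly $28K-$39K each lands in the $50-$70 billion
# range referenced in the question.
```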

Jensen: I really appreciate that. You know, this is one of those things where, Stacy, I was arguing with myself whether to do it or not. And here’s the question I was hoping to answer. Everybody’s going, you know, R1 came, and the amount of compute that you need has, like, gone to zero.

And the CSPs, they’re gonna cut back, and then there were rumors of somebody canceling. It was just so noisy. But I know exactly what’s happening inside the system. Inside the system, the amount of computation we need has just exploded. Inside the system, everybody’s dying for their compute.

They’re trying to rip it out of everybody’s hands. And I didn’t know exactly how to show that without just giving a forecast. And so what I did was take the people they were asking about, which were just the top four CSPs, really. Those are the CapEx numbers that everybody kinda monitors. And so I just took the top four CSPs, and I compared it year to year.

And I, obviously, therefore, underrepresented demand. Okay? I understand that. But I’m simply trying to show that the top four CSPs are fully invested. They have lots of Blackwells coming, and the capital investment that they’re making is solid.

Here’s the thing: what I didn’t include is obviously quite large. The Internet services that are not public clouds, but, you know, are Internet services, for example, X and Meta, I didn’t include any of that. I obviously didn’t include enterprise, car companies and car AI factories, and I didn’t include international. I didn’t include a mountain of startups that all need AI capacity.

I didn’t include Mistral. I didn’t include, you know, SSI, TMI, all of the great companies that are out there doing AI. I didn’t include any of the robotics companies. I basically didn’t include anything. And so I understood that, which then begs the question, and I was asking myself the same thing on stage, why did you do it?

But there were just so many questions about the CSPs’ investments that I kind of felt like maybe I’ll just get that behind us. And I hope it did. I think that the CSPs are fully invested, fully engaged, and there are two things that are driving them. One, they have to shift from general purpose computing to accelerated computing. The idea of building data centers full of traditional computers is not sensible to anybody.

And so nobody wants to do that. And so everybody is moving to this new way of doing computing: modern machine learning. And then second, there are all these AI factories that are built for just one purpose only, and that’s not very well characterized and, you know, followed by most. And I think in the future, these specialized AI factories, and that’s why I call them AI factories, that’s really what the industry is gonna look like someday, above this $1,000,000,000,000 of data centers.

Stacy Rasgon, Bernstein Research: What was the number, though? Was that shipments plus orders to those four customers within the first eleven weeks of the year? That’s what that 3.6 is?

Jensen: No. That 3.6 is what they have ordered from us

Stacy Rasgon, Bernstein Research: Ordered. Okay.

Jensen: Of just Blackwells.

Stacy Rasgon, Bernstein Research: Okay.

Jensen: Yeah. So far.

Stacy Rasgon, Bernstein Research: So far

Jensen: I know. The year just started.

Stacy Rasgon, Bernstein Research: Okay.

Jensen: Yeah. Exactly. Yep.

Stacy Rasgon, Bernstein Research: Thank you.

Jensen: Our demand is much greater than that, obviously.

Vivek Arya, Bank of America Securities: Thank you. Good morning. Hi, Jensen. Hi, Colette. Vivek Arya from Bank of America Securities.

Jensen: Hi, Steve. Vivek Arya.

Vivek Arya, Bank of America Securities: Thanks for hosting a very informative event. Jensen, I had a near-term and sort of intermediate-term question. So on the near term, Blackwell execution: it’s an incredibly complex product, obviously with very strong demand for it, but it has pressured gross margins a lot, right? I think some growing pains were to be expected, but we have seen margins go from the high 70s to the low 70s. So can you give us some confidence and assurance that as you get to Blackwell Ultra and Rubin, we should expect margins to start heading back, that there will be more profitable products versus what Blackwell has been so far.

So that’s kind of the near term. And then as we look out into 2026, Jensen, what are the hyperscale customers telling you about their CapEx plans in general? Because, you know, there’s a lot of building of infrastructure, but from the outside, we don’t always get the best metrics to kind of visualize what the ROI is on these investments. So as you look out at 2026, what’s your level of confidence that their ability and desire to spend in CapEx can kind of stay on this pace that we have seen the last few years? Thank you.

Jensen: Yep. Our margins are going to improve, because, as I was explaining yesterday, we changed the architecture from Hopper to Blackwell, not just the chip name, but we changed the system architecture and the networking architecture completely. And when you change the architecture that dramatically across the system, and now we’ve succeeded in doing so, there are so many components whose costs are hard to exactly quantify that, now that it’s all accumulated, the transition is challenging. Everybody’s cost is a little higher. Everybody’s new connector is a little higher.

Everybody’s new cable is a little higher. Everybody’s new everything is a little higher. But now that we ramp up into production, we’ll be able to get those yields up and those costs down, okay? So I’m quite confident that yields will improve as we use this basic architecture, now called Grace Blackwell, this new NVLink 72 architecture, and we’re going to ride this for about three to three and a half years, four years, okay? And so we have opportunities between here and when we’re fully ramped to improve yield and improve gross margins.

And so that’s that. In terms of CapEx, today we’re largely focused on the CSPs. But I think very soon, starting almost now, you’re starting to see evidence that in the future, these AI factories will be built even independently of the CSPs. We’re going to see some very large AI factory projects, hundreds of billions of dollars of AI factory projects. And that’s obviously not in the CSPs.

But the CSPs will still invest and they will still grow. And the reason for that is very clear. Machine learning is the present and the future; you’re not gonna go back to hand coding. The idea of large-scale software capabilities being developed by humans only, as we sit there typing, is almost quaint.

You know, it’s cute. It’s funny, but it’s not gonna happen long term, not at scale. And so we now know that machine learning and accelerated computing is the path forward. I think that is a sure thing now. The fact that every single CEO understands this, the fact that every single technology company is here, that we have partnerships from the Dells and HPEs, which are very understandable, to Cisco, who have now also joined us, and the industries, healthcare is here and retail is here and GM is here and car companies, startups to traditional, you’re starting to see that people realize that this is the computing method going forward.

And so for data center, the part that I have great confidence in is that the percentage of that purple is going to keep becoming gold. And remember that purple is compute CapEx. That has the opportunity to be 100% gold, right? And so I think that journey is fairly certain for me now, okay? And then the rest of it is how much more additional AI factory gets built on top of that, which is what we’ll have to discover.

But the way I reason about the fact that it’s going to be significant is that every single industry in the world is going to be powered by intelligence. And every single company is going to be manufacturing intelligence. And out of that $120,000,000,000,000 or so of industry in the world, how much of it is going to be about intelligence manufacturing? Pick your favorite number, but it’s measured in trillions.

Tim Arcuri, UBS: Hi. It’s Tim Arcuri at UBS. Thanks. Jensen, I wanted to ask about custom ASICs. And I ask because, you know, we listen to some of the same CSPs that you put up on that slide, and we listen to some of the companies who are making custom ASICs.

Some of the deployment numbers sound pretty big. So I wanted to just hear your position: how you’re gonna compete with custom ASICs, how they can possibly compete with you, and maybe how some of your conversations with these same customers inform your view of how competitive custom ASICs will be to you? Thanks.

Jensen: Yes. First of all, just because something gets built doesn’t mean it’s great. Number two, all of those companies are run by great CEOs who are really good at math. And because these are AI factories, it affects your revenues, not just your costs. It affects your revenues, not just your costs. It’s a different calculus.

Every company only has so much power. You just have to ask them. Every single company only has so much power. And within that power, you have to maximize your revenues, not just your cost. So this is a new game.

This is not a data center game. This is an AI factory game. So when the time comes, that simple calculus, that simple math that I was showing yesterday, still has to be done, which is the reason why so many projects are started and so many are not taken into production. Because there’s always another alternative. We are the other alternative, and that alternative is excellent, not just normally excellent, as you know.

Everybody is still trying to catch up to Hopper. I haven’t seen a competitive Hopper yet. And here we’re talking about 40x more. And so our roadmap is at the limits of what’s possible, not to mention we’re really good at it and completely dedicated to it. A lot of people have a lot of businesses to do.

I’ve got this one business to do. And we’re all in on this: 35,000 people doing one job, and we’ve been doing it for a long time. The depth of capability, the scope of technology, as you saw yesterday, pretty incredible. And it’s not about building a chip; it’s about building an AI factory.

We’re talking about scale up, scale out. We’re talking about networking and switches and software. We’re talking about systems, and these system architectures are insane. Even the system itself. Notice, 100% of the computer industry has standardized on NVIDIA systems.

Why? Because try to build an alternative. Building the alternative is not even thinkable, because look at how much investment we put into building this one. And so even the system is hard. People used to think a system is just sheet metal. Hardly.

600,000 parts is hardly sheet metal. And so all of the technology is hard. We’re pushing every single dimension to the limit because we’re talking about so much money. The world is going to lay down hundreds of billions of dollars of investment in just the next couple of, two, three years. Let’s do the thought experiment.

Let’s say you want to stand up a data center and you want it to be fully operational in two years’ time. When do you have to place the PO on that? Today. So let’s suppose you had to place a $100,000,000,000 PO on something. What architecture would you place it on?

Literally, based on everything you have today, there’s only one. You can’t reasonably build out giant infrastructures with hundreds of billions of dollars behind them, hoping to turn them on and get the ROI on them, unless you have the confidence that we are able to provide you, and we can provide you complete confidence. And singularly so: we’re the only technology company that, if I had to go place $100,000,000,000 on an AI factory, and that’s interesting, I did, is literally willing to place $100,000,000,000 of POs across the industry to go build it out.

And you guys know that’s the depth of our supply chain. And we are willing, and we have. Give me another one that has that depth and that length. And now it’s to the point where we’ve got to go and work with the supply chain upstream and downstream to prepare the world for hundreds of billions of dollars, working towards trillions of dollars, of AI infrastructure build-out.

Our partnerships with power companies and all of the cooling companies, the Vertivs, the Schneiders, our partnerships with BlackRock. The partnership network necessary to prepare the world to go build out trillions of dollars of AI infrastructure, that’s underway as we speak. What architecture and what ASIC chip do you go select? That doesn’t even make sense. It’s a weird conversation even.

And so I think the game is quite large. The investment level, and therefore the risk level, is quite high. And so the certainty that you’re selecting the best is quite important. And the certainty that you can execute, vital. We are the company you can build on top of.

We’re the company, we’re the platform that you can build your AI infrastructure on, and we’re the company that you can build your AI infrastructure strategy on. And so I think it includes chips, but it’s much, much more than that.

Mark Lipacis, Evercore ISI: Hi. Mark Lipacis from Evercore ISI. Thank you so much for the informative presentation yesterday and sharing your insights today. Jensen, you brought up the expression homogeneous clusters. Why is this concept important?

Why would it be better than a heterogeneous cluster? And I think an investor question here would be, as your customers scale to million-node clusters, is it your view that those will more likely be homogeneous than heterogeneous? Thank you.

Jensen: The answer to the last question, Mark, and I appreciate the question, is yes. But it wouldn’t be my style to just leave it there, because your understanding is too important to me. So let me work on it backwards. You know, the first GTC NVIDIA ever had was at the Fairmont Hotel. Okay?

And it’s not far from here, right next to Great America. And, literally, the entire GTC was this room divided in half. Literally, 100% of the audience was scientists, because they were the only users of CUDA. Now, my recollection of GTC and what GTC means to me is still that: a whole bunch of computer scientists, a whole bunch of scientists, a whole bunch of computer engineers, and we’re all building this computing platform.

And so I’ve always somehow had in my head that my GTC talks could be nerdier than usual. And so I show some charts that no CEO would or should. And so if you saw the Pareto frontier, the frontier simply is a way of saying that underneath that curve in our simulation were hundreds of thousands of other dots. Meaning that data center, that factory, behaves differently depending on the workload and depending on the work style, the prompt style. Remember, a prompt is how you program a computer now.

And therefore, every time you prompt it differently, you’re actually programming the computer differently, and therefore it processes it differently. And so depending on your style of prompting, deep research or just a simple chat or search, depending on that spectrum of questioning or prompting, depending on whether it’s agentic or not, depending on all of that, the data center’s configuration is different. And the way you configure it is using a software program called Dynamo. It’s kinda like a compiler, but, you know, it’s a runtime. And it sets up the computer to be parallelizable in different ways.

Sometimes it’s tensor parallel, pipeline parallel, expert parallel. Sometimes you wanna put the computer to work doing more floating point. Sometimes you wanna use it to do more token generation, which is more bandwidth constrained. And all of this is happening at the same time. So if every single computer has a different behavior, or if every computer has a different programming model, the person who’s developing this thing called Dynamo would just go insane.

And the computers would be underutilized, because you never know exactly what is needed. You know, it’s kinda like in the morning you need more of this, and in the night you need more of that. And so if everything is fungible, then you don’t care, which is the reason why homogeneous is better. Every single computer is fungible. Literally, from Hopper on, every single computer could be used for context processing or decode, from prefill to decode, from tensor parallel to expert parallel.

Every single computer could be flexibly used in that way, running Dynamo. And so the utilization and the data center performance will be higher. Everything will be better. Energy efficiency will be better. Whereas if this computer can only be good for prefill, or this computer is only good for decode, or this computer is only good at expert parallel, you know, it’s kinda weird.

Too hard.
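A toy sketch of the fungibility argument above, for readers who want to see it mechanically. This is not Dynamo’s actual API; the pool size, the workload mixes, and the prefill/decode split are made-up assumptions used only to illustrate why a homogeneous pool can be re-balanced while fixed-function machines cannot.

```python
# Toy illustration of homogeneous-cluster fungibility (not Dynamo's API).

from dataclasses import dataclass

@dataclass
class WorkloadMix:
    name: str
    prefill_share: float   # fraction of GPU time on context processing (prefill)

def allocate(total_gpus: int, mix: WorkloadMix) -> dict:
    """Split one homogeneous pool between prefill and decode for a given mix."""
    prefill_gpus = round(total_gpus * mix.prefill_share)
    return {"prefill": prefill_gpus, "decode": total_gpus - prefill_gpus}

POOL = 72  # illustrative: one rack-scale NVLink domain's worth of GPUs

# Because every GPU can serve either role, the same pool is re-split as the
# prompt style shifts during the day.
for mix in (WorkloadMix("deep research (long context)", prefill_share=0.7),
            WorkloadMix("simple chat (long generation)", prefill_share=0.2)):
    print(mix.name, allocate(POOL, mix))

# A heterogeneous cluster with fixed prefill-only and decode-only machines
# would leave capacity stranded whenever the mix drifted from its fixed ratio.
```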

Joe Moore, Morgan Stanley: Yeah. Joe Moore, Morgan Stanley. So, very compelling conversation about reasoning and the hundred-x increase in compute requirements. But I guess one of the anxieties the market had was DeepSeek talking about doing that reasoning on fairly low-end hardware, even consumer hardware, in China, maybe forced to do this on low-end hardware. So can you talk about that disparity, and how does this complexity pan out for you guys?

Jensen: Yeah. They were talking about a technology called distillation. With distillation, you train the largest model you can. Okay? So GPT-4, ChatGPT 4.0, I think it’s a couple of trillion parameters.

R1 is 680 billion. Llama 3 is 400 billion or so. Is it 280, 400, something like that? I forget anymore. And the version that people run mostly is 70B.

So R1 is 680 billion, ChatGPT is 1.4 trillion, and the next one, I’m gonna guess, is 20 trillion. And so you wanna build the smartest AI you can. Alright? Number one. The second thing you wanna do is distill it, quantize it, reduce the precision, into multiple different configurations.

Some of it, you might continue to run it in the largest form, because quite frankly, the largest is actually the cheapest. And let me tell you why. As you know, Joe, there are many problems where getting the smartest person to do it is the cheapest way to do it. There are actually a lot of problems like that. Getting the cheapest person to do something is not necessarily the cheapest way to get it done.

Do you guys agree? Are you guys following me? In this audience, I hope that you understand.

Okay. So it turns out that there are many problems where it would actually cost less in runtime to have the smartest model do it. And so depending on the problem you’re trying to solve, you use the cheapest form that works. And so we take the largest and distill it into smaller and smaller versions. And even the smallest version is only, like, one billion parameters, which fits on a phone.

But I surely wouldn’t use that to do research. You know what I’m saying? I would use the larger version to do research. And so they all have a place, but it’s a technology called distillation, and people just got worked up about it. And even today, you can go into ChatGPT, and there’s a whole bunch of configurations.

Do you guys see that? There’s, is it o3-mini or something like that? It’s a distilled version of o3. And if you like to use that, you can. Me, I use the plain one.

And so it’s just up to you. Does it make sense? Yeah. It’s a technology called distillation. Nothing changed.

Yeah. And this is probably gonna have to be the last question. I wanna stay. I was late. Okay.

Yeah. Make the next meeting wait. I’m just kidding. The next meeting is lunch.
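For readers who want to see the distillation idea above mechanically, here is a minimal sketch. The teacher/student framing follows the explanation in the answer; the temperature, vocabulary size, and random tensors are illustrative assumptions, not any specific lab’s recipe.

```python
# Minimal sketch of knowledge distillation: a large teacher model's output
# distribution supervises a much smaller student. Constants are illustrative.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label loss: the student matches the teacher's softened distribution."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence between student and teacher, scaled by T^2 as is conventional
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

vocab_size = 32_000
teacher_logits = torch.randn(4, vocab_size)   # stand-in for the large model's outputs
student_logits = torch.randn(4, vocab_size)   # stand-in for the small model's outputs
print(distillation_loss(student_logits, teacher_logits).item())
```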

Aaron Rakers, Wells Fargo: Perfect. Thanks for taking the questions. Aaron Rakers at Wells Fargo.

Jensen: Nice to see you.

Aaron Rakers, Wells Fargo: see you. Jensen, we talk a lot about the compute side of the world and and how these scaled up architectures are evolving. One of the underlying platforms that I feel are key to your success and your strategy is NVLink. And I’m curious as you move from ’72, and I know there’s some definitional difference around ’72 to 01/1944. Yep.

I’m curious, how do we think about the evolution of NVLink to support continued scaled-up architectures, and, you know, how important is that to your overall strategy? Thank you.

Jensen: Yeah. NVLink. I said yesterday, with distributed computing, just like in this room, let’s say we have a problem we have to work on together. You can get the job done faster if we actually have fewer people, but they’re all smarter and they all do things faster. So you want to scale up first before you scale out.

Does that make sense? We all love teamwork, but the smaller the team, the better. And therefore, you want to scale up the AI before you scale out the AI. You want to scale up computing before you scale out computing.

Now, scale up is very hard to do. And back in the old days, monolithic semiconductors, the beginning of Moore’s Law, was the only way we knew how to scale up. Does that make sense? Are you guys following me? Just make bigger and bigger and bigger chips.

At some point, we didn’t know how to scale up anymore because we were at reticle limits. And that’s the reason why we invented, and you don’t remember it anymore, but back in the old days during GTC, I was talking about this incredible SerDes we invented, the most high-speed, energy-efficient SerDes. NVIDIA is world class at building SerDes. I’d go so far as to say we are the world’s best at building SerDes. And if there’s a new SerDes that’s gonna get built, we’re gonna build it.

And so whether it’s the chip-to-chip interface SerDes or the package-to-package interface SerDes, which enabled NVLink, our SerDes is absolutely the world’s best, always at the bleeding edge. And the reason for that is because we’re trying to extend beyond Moore’s Law. We’re trying to scale up past reticle limits. Now the question is, how do you do that? People talk about silicon photonics.

There’s a place for silicon photonics, but you should stay with copper as long as you can. I’m in love with copper. Copper is good. You know, it’s time tested. It works incredibly well.

And so we want to stay with copper as long as we can. It’s it’s very reliable. It’s very cost effective, very energy efficient. And you go to photonics when you have to. And so that’s the rule.

And so we scale up as far as we can with copper, which is the reason why NVLink 72 is in one rack, which is the reason why we pushed the world to liquid cooling. We pushed that knob probably two years earlier than anybody wanted to in the beginning, but everybody’s there now, so that we could scale up NVLink to 72. And then with Kyber, our next-generation system, we can scale it to 576 using copper in one rack. And so we should scale up as far as we can, use copper as far as we can, and then use silicon photonics CPO if necessary. And for the necessary part, we’ve prepared the world; we’ve now built the technology.

So in a couple of, two, three years, we could start scaling out to millions of GPUs. Does that make sense? And so now we can scale up to thousands and scale out to millions. Yeah. Crazy, crazy technology.

Everything is at the limits of physics.
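A simple sketch of the scale-up-then-scale-out arithmetic. The NVLink domain sizes (72 today, 576 with the next-generation Kyber rack) come from the remarks above; the rack counts below are illustrative assumptions only.

```python
# Scale-up-then-scale-out arithmetic, with assumed rack counts.

def total_gpus(scale_up_domain: int, racks: int) -> int:
    """Scale up first (GPUs per copper-connected NVLink domain), then scale out (racks)."""
    return scale_up_domain * racks

# Copper keeps the scale-up domain inside one rack; scaling out across racks,
# eventually over silicon photonics/CPO, multiplies it toward millions.
print(total_gpus(72, racks=1_000))    # 72,000 GPUs
print(total_gpus(576, racks=2_000))   # 1,152,000 GPUs: the "millions" regime
```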

Unidentified speaker0: Linkers, Jefferies. Appreciate the bonus time too. Thanks for taking the question. I wanna ask you, and I know you were kinda making a joke about how you couldn’t give Hopper away. That’s not the point. You were trying to say the performance is so incrementally better that new sales are gonna go there very quickly. But I’m kinda curious about the life cycle.

You get this question a lot. I think on the last earnings call, you talked about people still using A100s. Right? It’s diversity of workload. But I think there was this perception that inference was a lesser workload.

You clearly made the case that now you need the best systems with Dynamo. Price per token is very important. So I just wanna think about how, you know, so far you have seen really just greenfield deployments. But do you see people rip and replace? And, you know, if power is the constraint, when do we see that, in terms of people just pulling hardware out?

You know, what is the life cycle of the GPU these days, with the workload spanning both training and inference?

Jensen: I really appreciate that question. First of all, the life cycle of NVIDIA accelerators is the longest of anyone’s. Would you guys agree? There you go. The life cycle of NVIDIA’s GPUs is the longest of any accelerator in the world.

Why does that matter? It directly translates to cost. Easily three years longer, maybe four. And the versatility of our architecture: you could run it for this and that, you could use it for language and images and graphics.

Isn’t that right? All these data processing libraries, none of them have to be on the newest hardware. Even using Ampere for data processing is still an order of magnitude faster than CPUs, and they still have CPUs in data centers. And so there’s a whole waterfall of applications they could put their older GPUs into, and then use the latest generation for their leading-edge work, for AI factories and such.

And so, I also said something: if a chip’s not better than Hopper, quite frankly, you couldn’t give it away. This is the challenge of building these data centers. And the reason is the cost of operation, the cost of building it up, the TCO of it, the risk of building it up. A $100,000,000,000 data center, okay, which is only two gigawatts, by the way; every gigawatt is about $40,000,000,000 to $50,000,000,000 to NVIDIA, right? Every gigawatt of data center is about $50,000,000,000, roughly, let’s say.

So every gigawatt, $50,000,000,000. And so when somebody’s talking about five gigawatts, that math is pretty clear. You’ve just got to do all the math, as I explained yesterday. And it becomes very clear that if you’re going to build that new data center, you need the world’s best in all these areas. And then once you build it up, you have to start thinking about how you retire it someday. And that’s where the versatility of NVIDIA’s architecture really, really kicks in.

So we are not only the world’s best in technology, so your revenues will be the highest. We’re also the best from a TCO perspective, because operationally we’re the best. And then, very importantly, the life cycle of our architecture is the longest. And so if our life cycle is six years instead of four, that math is pretty easy to do. My goodness, the cost difference is incredible.

Yeah, people are starting to come to terms with all of that, which is the reason why these very specific niche accelerators, or point products, are kind of hard to justify building a $100,000,000,000 data center with.
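Back-of-the-envelope math for the figures above. The roughly $50,000,000,000 per gigawatt and the four-versus-six-year lifecycle comparison come from the remarks; treating the whole build as straight-line amortized is a simplifying assumption for illustration.

```python
# Data center build-cost and lifecycle amortization sketch (simplifying assumptions).

COST_PER_GW = 50e9   # dollars of data center build per gigawatt, roughly

for gigawatts in (2, 5):
    print(f"{gigawatts} GW -> ~${gigawatts * COST_PER_GW / 1e9:,.0f}B build cost")

build_cost = 2 * COST_PER_GW          # the $100B, two-gigawatt example
for lifecycle_years in (4, 6):
    annual_cost = build_cost / lifecycle_years
    print(f"{lifecycle_years}-year life -> ~${annual_cost / 1e9:,.1f}B per year")
# Stretching the useful life from four to six years cuts the annualized cost
# of the same $100B build by a third.
```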

Unidentified speaker1: Thanks for the time. Pierre Ferragu, New Street Research. That’s really music to my ears, this idea that NVIDIA chips have a very long lifetime while they’re actually amortized relatively rapidly on balance sheets. And it really makes me think that in the future, data centers are going to have a business model that is very equivalent to the foundry business model. So you buy super expensive equipment to manufacture chips, and you depreciate it twenty years down the line.

And so I’m trying to think about how the industry is going to grow with that framework in mind. And I look at the economics, for instance, of OpenAI, the economics they shared with investors in the last round. And my understanding is that in 2028, 2029, they are going to be deploying something like a $250,000,000,000 data center, and they’re going to use it at their frontier to develop their most advanced models. But then, if they want to move to the next frontier data center, which might be like a $350,000,000,000 data center at the beginning of the next decade, they’ll have to drop off somewhere a $250,000,000,000 data center, and they’ll have to figure out how to fill that data center so that the frontier of the industry can keep moving towards the more advanced technology.

And I’m still struggling to see how this is going to work, because the inference of OpenAI alone is probably not going to be enough to fill such a big data center every year or every other year, if they change their leading-edge data center every year or every other year. Does that make sense?

Jensen: Yes. Thank you. Except that I don’t think they’re going to have any trouble. And the reason for that is this. The demand for inference will be larger than the demand for training.

It already is. The number of computers they use for inference is already huge, but the response time is getting slower and slower. And during certain parts of the day, I can tell when you guys are all on it; I pretty much don’t get my answer back. Are you guys following me?

Obviously, that’s a problem. And obviously, they know that. And that’s the reason why they’re clamoring for more capacity. They’re racing because the inference workload is just too high. Someday, I believe, training will be only 10% of their capacity.

But they need that 10% to be the fastest of the whole 100. Okay? And so every year, they’re going to come up with a new state of the art. And the reason for that is because they want to stay at the leading edge. They want to have the best products.

Just like NVIDIA today, we’re going to be looking at the future of the future. There are five companies out there that need to stay ahead, and everybody will invest in the necessary state-of-the-art technology to stay ahead. But that technology, that training capacity, ideally will represent 10%, 5%, 20% of their total capacity. Let’s use an example: TSMC.

TSMC’s prototyping fab, the one they run my tape-out chips on, is optimized for low latency, because I need to see my silicon, my prototypes, as soon as possible. The cycle time for a prototype is only, let’s pick, two months. But the cycle time for production is six months. And it’s the same equipment. And so they have a fab that is only for prototyping.

But that fab represents, I don’t know, 3%, 5% of their overall capacity. The rest of their capacity is used for manufacturing, which in this analogy is inference; 3% is used for prototyping. And so if you told me that in the near future, OpenAI will invest $350,000,000,000, or, you know, pick a favorite number, every single year, I completely believe it.

And it’s just that the trailing capacity will be used for inference. And the revenues will just have to support that. And I believe that the production of AI, the production of intelligence, will support that level of scale.

Joe Moore, Morgan Stanley: Maybe this will be our last question.

Unidentified speaker3: Thanks. It’s Brett Simpson at Arete Research. Jensen, it feels like there’s a tipping point for reasoning inference at the moment, which is great to see. But a lot of folks in this room are concerned about the macro backdrop overall at the moment. They’re concerned about tariffs, or potentially what impact tariffs have on the sector, and maybe this leads to a U.S. recession of some sort.

And I’d love to get your perspective, and maybe also Colette’s perspective, just thinking through: if there is a scenario where we see a U.S. recession, what does that do to AI demand? You know, how do you think about the impact on your business if this comes through?

Jensen: If there’s a recession, I think that the companies that are working on AI are going to shift even more investment towards AI, because it’s the fastest growing. Every CEO will know to shift towards what is growing. And second, tariffs. We’re preparing, and we have been preparing, to manufacture onshore. TSMC’s investing $100,000,000,000 in fabs here in Arizona, and the fab that they already have, we’re in it.

We are now running production silicon in Arizona. And so we will manufacture onshore. The rest of the systems, we will manufacture as much onshore as we need to. And so I think the ability to manufacture onshore is not a concern of mine. As for our partners, we are the largest customer for many companies.

And so they’re excellent partners of ours. And I think everybody realizes the value of having onshore manufacturing and a very agile supply chain. We have a super agile supply chain, with manufacturing in so many different places that we can shift things around. Tariffs will have a little impact for us short term. Long term, we’re going to have manufacturing onshore. Okay.

How about I take one more? The last question has to be a happy question. Not that tariffs was a sad question; tariffs was a happy question. How about one more question? The ultimate responsibility, sir.

Unidentified speaker4: Well, it might not sound so much like a happy question. It’s Will Stein from Truist, but I’m hoping you can put a happy spin on it.

Jensen: Oh, Will.

Unidentified speaker4: Another question, then. Jensen, what do you see as the biggest technical or operational challenges today? You just mentioned, with regard to U.S. production, that that’s not really weighing heavily on you.

What is weighing on you, and what is the company doing to turn that into an opportunity that will result in even better revenue growth going forward?

Jensen: Yeah. Well, I really appreciate the question. In fact, everything I do starts out with a problem. Right? Almost everything I do starts out with a dream, or an irritation, or a concern, or some problem.

And so what are the things that we did? What are the things that we’re doing? Well, here is one of the things you’re seeing us do: no company in history has ever laid out a full roadmap three years out. No technology company has ever done that. And the reason we do it is, one, the whole world is counting on us.

And the amount of investment is so gigantic, they cannot have surprises. We’re in the infrastructure building business, not the consumer products business. And the most important things to our partners are trust, no surprises, confidence in execution, confidence in your ability to build the best. They use all of the same words that you would use to describe, for example, TSMC.

Are you guys following me? All the same ideas. We are an infrastructure building company now, and so many of the things you saw me do are a reaction to that. And I know it looks quite strange for a technology company to sit here and tell you about all these things: while you’re sitting here buying this one, I’m already telling you about the next one, and while you’re about to place an order for the next one, I’m already telling you about the one after that, forcing you to live in regret all the time.

Okay. However, I also know this: they need to run their business every single day. They can’t wait three years to run their business and do a better job. They have to run it every single day. They have no choice, because we’re in the AI factory business.

So one, we’re in the AI infrastructure business. No excuses, no surprises, long roadmap. Two, we’re in the AI factory business. We’ve got to make money every day. They’re going to buy some every day no matter what.

No matter what I tell them about Rubin, they’re going to buy Blackwells. There’s no question. No matter what I tell them about Feynman, and I couldn’t wait to tell you guys about Feynman, but I thought the keynote was long enough, they’re still going to buy. Right? Does that make sense?

We’re in the AI factory business. People have to make money every day. And then here’s the third thing: we’re a foundation business for so many industries. AI is foundational for so many industries, and so far we have served only the cloud. In order for us to serve the telecommunications network, yesterday we announced with T-Mobile and Cisco in the U.S. that we’re going to build a 6G AI RAN. And it’s all completely built on top of Blackwell. And that ecosystem, what’s their CapEx spend, $100,000,000,000 a year? That has to be retooled and re-architected and reinvented.

You can’t just do that in the cloud; we have to do that for AI for industries. You can’t just do that in the cloud; we have to do that for AI for enterprise IT. You can’t just rely on the cloud to do that.

There are different architectures, different go-to-markets, different software stacks, different product configurations, different purchasing styles, and therefore the product has to fit the style of purchasing. So each one of these industries needs an AI infrastructure. So we now know three things about our company. We’re an infrastructure company, so we are part of the supply chain in front of us and behind us, everything from land, power, and capital. Okay.

And so: one, AI infrastructure; two, AI factories; and three, we’re an AI foundation, a foundational company that the whole world is depending on, and therefore we have to bring the technology to them. It’s not just about AI to them, it’s about the entire computing platform, and that includes networking and storage and computing. And we have the might to do that. We have the technical skills to do that.

And so you heard all these things, frankly, in the keynote yesterday. I also had to remember that while we have work to do, it still had to be a little entertaining. But we did a lot of work yesterday, and we set the blueprints out not just for our company, but for the companies that are here, the industries that are here, and the associated companies before and after us in the supply chain. So many companies were affected by what I said yesterday, and we’re laying the foundations for all of that. And so I want to thank all of you for coming to GTC. It’s great to see all of you.

This is an extraordinary moment in time. I do really appreciate the question and the comment about R1 and the misunderstanding of it. There’s a profound and deep misunderstanding of it. It’s actually a profoundly and deeply exciting moment, in that it’s incredible that the world has moved towards reasoning AIs. But even that is just the tip of the iceberg.

And I hope to catch up with you guys again. We’re going to Computex, so I hope you’re coming to Computex. This year’s Computex is going to be gigantic, okay? We have lots and lots of things to do at Computex this year because, as you know, the computing ecosystem starts there.

And we’ve got a mountain of work to do there. So I look forward to seeing you guys there. All right, you guys. Thank you.

