On Monday, 08 September 2025, Advanced Micro Devices (NASDAQ:AMD) presented its strategic vision at the Goldman Sachs Communicopia + Technology Conference 2025. The discussion highlighted AMD’s competitive roadmap in AI and data center markets, balancing optimism with challenges. Key points included AMD’s aspirations for market share growth and its focus on delivering total cost of ownership (TCO) value.
Key Takeaways
- AMD aims for a 20% market share in the GPU segment and currently holds a 41% share in server CPUs.
- The MI450 GPU series is expected to generate significant revenue upon launch next year.
- AI deployment is constrained more by infrastructure than technology, with AMD seeing increased CPU demand driven by AI.
- AMD’s strategy emphasizes strong customer relationships and a phased, multi-generational approach to product development.
Financial Results
- AI-Accelerated TAM: AMD projects a $500 billion AI-accelerated Total Addressable Market by 2028, a figure gaining credibility as AI adoption grows.
- Data Center GPU Revenue: Significant revenue is anticipated from the upcoming MI450 GPU series.
- Server CPU Market Share: AMD’s server CPU market share has grown to 41%, a substantial increase from zero seven years ago.
- GPU Market Share Aspiration: AMD targets a 20% share in the GPU market as an intermediate goal.
- Pricing and Gross Margin: AMD focuses on pricing strategies that reflect the value delivered, aiming for higher Average Selling Prices (ASPs) than competitors.
Operational Updates
- GPU Development: AMD is implementing a phased development plan for GPUs, with the MI300 series focusing on inference and the MI450 targeting leadership in all AI workloads.
- Rack-Level Solutions: AMD is enhancing system-level capabilities, partnering with ZT Systems for the Helios rack design, which integrates with existing data center infrastructure.
- Software Support: Emphasis is placed on supporting key software frameworks and libraries to meet major customer needs.
- Customer Engagement: AMD collaborates closely with major hyperscalers, ensuring the MI450 meets their requirements.
Future Outlook
- GPU Market Strategy: AMD is building a comprehensive solution-level approach, integrating CPUs and networking with GPUs to deliver superior performance and TCO.
- AI Market TAM: The projected $500 billion AI-accelerated TAM is reaffirmed, with infrastructure availability being the primary constraint on deployment.
- Revenue Drivers: The MI450 GPU series is expected to drive revenue growth, with AMD preparing for seamless integration into customer infrastructures.
- Customer Expansion: AMD plans to expand its customer base with the MI355 and MI450, beyond initial targets.
- Product Roadmap: AMD aims for an annual product release cadence, balancing innovation with minimal customer disruption.
Q&A Highlights
- AI and CPU Demand: AI deployment is increasing CPU demand, as AI-driven systems generate higher workloads.
- Enterprise Adoption: AMD’s cloud share runs roughly 20 points ahead of its enterprise share, though enterprise share is growing somewhat faster as enterprises grow more comfortable with AMD.
For further details, readers are encouraged to refer to the full transcript of the conference call.
Full transcript - Goldman Sachs Communicopia + Technology Conference 2025:
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Okay, good afternoon, everybody. Welcome to the Goldman Sachs Communicopia + Technology Conference. My name is Jim Schneider. I’m a Semiconductor Analyst here at Goldman Sachs, and it’s my pleasure to welcome Advanced Micro Devices and its EVP of Data Center Solutions, Forrest Norrod, with us today.
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: Thanks a lot.
Unidentified speaker: Thanks for being here.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Thank you. First, maybe we want to start very big picture and talk about AI as a broad topic. In contrast to many of the speakers here at the conference who are coming at AI from the perspective of applications, you’re coming up from the perspective of infrastructure. You know, what’s your high-level vision for where AI is going as a technology? Why does the world need it? How useful do you think it’s going to be? Do you really think the reality is going to live up to the level of investment we’re seeing today?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: I think the story is still in the very early innings, but the indications are super positive. You know, Advanced Micro Devices has been honored to be in the discussion with many of the leaders in AI model development for quite a few years. We’ve had a good perspective to see the development of the technology and its application very early on. We have seen it, and we’ve used it ourselves, for both business processes and engineering development. I would say our assessment is it’s still in its infancy, but super positive. On the software side, we’re seeing some pretty substantial improvements in productivity as well as in time to develop code, for both software development and verification tasks. Increasingly, we’re using it for chip development on the hardware design side as well.
We’re seeing all of the right early indications to say, look, this is going to develop or deliver, sorry, real business value. I think that’s the fundamental question. If it does, and we feel confident that it will, this is going to be a hugely transformative technology.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: In terms of the use cases, I think you as a company have stated that consumer AI applications are well ahead of the enterprise. I think that’s borne out by a lot of the data points we see in the market. What consumer use cases do you see in the industry today that excite you most in terms of both utility and monetization?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: I think on the consumer side, I’m going to pivot that a little bit because I think we’re increasingly seeing more indication of traction on the business side as well. We are an engineering company, and the thing that gets us most excited is seeing productivity enhancements flowing from use of AI as part of the engineering process. We’re starting to see that really materialize in a major way. I think that the consumer side is still going to be a fascinating area, and everybody likes to use their chatbot. Where this is really going to change the world is can we change business and development processes? That’s where I think we’re much more excited at this point.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: How do you think about some of the potential bottlenecks for AI deployment? Do you think raw computing power or networking power is still limiting the pace of AI software deployments? If so, which is more of a limiting factor, compute or networking?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: I think that one of the most interesting challenges from a computer architecture point of view right now is, you know, AI is, by its nature, a very distributed problem, distributed in terms of inside of the GPU and increasingly distributed across many GPUs and very large systems, particularly when you get to agentic cases. Deploying AI systems means really deploying a number of different workloads, a number of different models supported by other applications across a very large network computer. That is an interesting computer science problem. Certainly, networking and communicating efficiently and effectively across these resources is perhaps an increasingly large part of this because slight inefficiencies in distributing the problem and networking it and coalescing results can make major impacts on the effectiveness and the efficiency of the deployment.
We do think the importance of networking, the importance of distributed systems from a software perspective as well, are going to be dominant factors in performance of these systems going forward.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: I want to pivot to AMD’s progress in the business for a second. I think you’ve done a great job of ramping your GPU franchise over the past couple of years, going from very little in sales to about $7 billion this year, if the street is correct. What has worked out best for you so far? In what areas have you seen slower-than-expected progress?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: If you take a look at the way we’ve approached the GPU market, in many ways it’s similar to what we did on the CPU side. We took a strategy of building a multigenerational, phased approach to gradually build up the competitiveness and differentiation of our solutions over multiple generations, thinking about how we systematically get more and more competitive and take leadership positions in a larger number of workloads. We started in the MI300 generation with inference. We said, hey, look, because we’re a chiplet architecture, we have the ability to have more memory than our competitor at the time. That translates into more efficient inference, particularly as inference scales out. So we went after inference leadership in MI300 and MI325.
We’ll build out our software ecosystem to make sure we’re making that capability as accessible as we can, as fast as we can. We will systematically build out training capability in MI355, both on the silicon and the software side. It all culminates in our MI450 generation, which we’re launching next year. That is, for us, our no-asterisk generation, where we are targeting leadership performance across the board in any sort of AI workload, be it training or inference. Everything we’ve been doing has been focused on the hardware and the software, and increasingly now at the system and cluster level as well, to build out that capability so it all intersects. MI450 is perhaps akin to our Milan moment, for people that are familiar with our EPYC roadmap.
The third generation of EPYC CPUs is the one where we targeted having no excuses. It was superior. Rome and Naples were very good chips, and they were highly performant and the best possible solution for some workloads. Milan is where it was the best CPU for any x86 workload, period, full stop. We’re trying to view and plan for MI450 to be the same. It will be, we believe, and we are planning for it to be the best training, inference, distributed inference, reinforcement learning solution available on the market.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Interesting. If you look back and reflect on the biggest lessons of the last 12 to 18 months, how do they set AMD up for success over the next two to three years, let’s say? What are the most underappreciated points of AMD’s progress, in your opinion?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: I think, again, we’ve tried to take a very systematic and thoughtful approach to the problem, gradually building up our capabilities and making sure that we’re delivering value at each step along the way. First getting good at inference and then building up our capability to allow customers to begin training with us, all culminating with bringing it all together in the MI450 generation. I think one of the reasons that we did that is we also acknowledged that, yeah, I mean, quite candidly, NVIDIA is a fantastic company. They’ve done a fantastic job, and they were well ahead. We had to catch up.
We also knew that over the last three or four years, the most important thing for the big model companies has been for each of them to train their next frontier model, to get to the next level of capability the fastest. That’s driven most of the industry for the past few years. I think that’s driven NVIDIA’s success: they had the most mature ecosystem, and they had the fastest time-to-train promise.
We decided, with this multigenerational roadmap, to put the objective in place of, okay, when we get to 450, we’re going to be there the same time as when Vera Rubin was intended to be there, and we’re going to be there with that part that’s fully performant, the software stack that’s fully there, at least for the 80% of the market that’s constituted by the top 20% or so customers. We’ve focused on getting there in the 450 so that for training, there’s no excuses, and there’s no impediment, there’s no hesitation of, hey, if I’m training, I’ll be behind in this generation if I go with AMD. That’s been the learning for us, and that’s been the realization. 300, 325, 355, good for inference, a little bit behind in terms of time to introduction for training.
That’s been the thing that I think has slowed us down on the training progression, and we recognize that fairly early.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Maybe just thinking broadly, you know, you have broad kind of remit across data centers. How do you think about AMD’s total market opportunity in data centers on both the CPU and GPU side? Is there a specific market share you think you have the right to win, if you will? What’s the threshold of market share you would find kind of either encouraging or disappointing on the other hand?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: Yeah, you actually hit a sore point with me. There’s no such thing as a market share that it’s our right to win. If I ever hear that from our internal teams, I say, extinguish that from your vocabulary. We have no fair market share.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: I’ll say all of them in the future.
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: Yeah, because at the end of the day, look, customers are going to buy the best possible product to meet all of their needs. If we’re not offering that to them, we don’t have any right to any portion of the market. What we’ve done on the CPU side is come out, I think, with a compelling roadmap, and worked very closely with our customers over time. In the most recent quarter, according to Mercury Research, we’re at 41% share on server CPUs, up from essentially zero when we started this journey about seven years ago. Our share continues to grow there very rapidly on the CPU side. We’ve picked up about eight points of share in the last 12 months, and if anything, it’s picking up speed.
I think on the CPU side, the strength of our roadmap is such that the level of our customer engagements, both with the cloud customers as well as the broad enterprise customers, continues to improve. I think our share will continue to grow there. I’m very confident that our share will continue to grow quite strongly on the CPU side. We aspire to absolute server CPU leadership in a relatively short period of time. On the GPU side, we built this roadmap not just at the GPU level, but really at the solution level, with the right CPU and networking matched to that GPU, which we think will deliver not just performance but a compelling TCO value for customers. We aspire with that roadmap to be a meaningful portion of the market.
What that means is I think if you’re not in strongly into the double-digit percentage, say 20% of the market, you’re not a meaningful player. We certainly aspire to get to be a meaningful player as an intermediate step and then, of course, continue to grow over time.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Fair enough. Now, you previously shared as a company, 2028 AI-accelerated TAM of $500 billion. Help us draw a line from where we are today to that future point in time from a market standpoint. Is that a sort of a straight line? Is it something that accelerates over time? How do you think about how the market TAM evolves? Is there anything that really needs to happen in terms of technology modernization or anything else for that to happen? I mean, is that number even now too low?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: Our lead customers, and everyone sees this in the hyperscalers’ capital plans as well, continue to be extremely bullish on the long-term prospects of AI. When we first articulated that $500 billion TAM number well over a year ago, I think we got a lot of raised eyebrows and questions about it. It’s much less questioned today, because people are seeing the early results. In the end, if business value does not get realized by end customers from all of this technology, then this growth rate is going to slow down. We see enough evidence that the business value is there that we’re pretty optimistic this is going to continue to grow at a rapid pace. The pace right now, quite candidly, is modulated more by data center and power availability than anything else.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Yeah, I think that’s fair from what I hear too. Now, speaking with investors, I think probably the most debated number for Advanced Micro Devices is your data center GPU revenue for next year, 2026. Can you maybe help us think about the key growth drivers for that business, to the extent you have visibility? What needs to happen for you to capture your desired goal in terms of revenue scale?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: I think the key for us is, you know, we’ve obviously just introduced the MI350 series a few months ago, but we are anticipating material revenue from the MI450, which we’ll be launching about a year from now. We are expecting to see material contribution there. What’s going to drive that? It’s really continuing to work very closely with our end customers on preparing for their deployments of MI450. We’re getting a lot of extraordinarily positive response from our customers right now. You’ve heard from some of them at our Advancing AI Day back in June. You heard Sam Altman from OpenAI get up and talk about the very close partnership and feedback that they’ve been providing to us for the last few years, and their excitement over the MI450. You’ve heard the same thing from Oracle and a few others as well.
We are deeply engaged with quite a number of end customers on ensuring that as we wrap up the design and validation of the MI450 and the supporting rack-level and cluster-level infrastructure, we move smoothly through the rest of the validation, get it ready for production with them, and then ramp it efficiently and effectively into production with them. One of the things we’ve really spent a lot of time and attention on is making sure that the rack-level solution will move to market smoothly, with a minimum of hiccups. I hope folks will give us some credit for being very predictable in our execution on the data center side. I think we’ve got a good track record of doing what we said we would do for the last six or seven years.
That’s really come from a very rigorous development process that identifies risks and then takes them down in a very systematic way. A couple of years ago, as we were looking at the MI450, one of the obvious risks was shifting from delivering chips to literally delivering rack-level infrastructure. We very quickly decided to substantially bolster our system-level capabilities. We contracted with ZT Systems, and then we brought them on board, to begin the development of what became our Helios rack-level design over two years ago. Over the last two years, we’ve been very systematic about building up the design, proving it out subsystem by subsystem: building electrical, mechanical, signaling, cabling, and power subassemblies, prototyping them, proving them out, and getting the whole system ready for production. We’ve also made some interesting choices, I think, specifically to de-risk the design.
If you look at Helios, it’s very thoughtfully designed to be as compatible as possible at a data center level with alternatives that a customer might have. Things like making sure that the ratio of air cooling to liquid cooling within the rack is equivalent to or similar to NVIDIA, so the customers can build data centers with the right number of chillers. That’s an 18-month lead time item. If we require a substantially different number of chillers per hundred megawatts than NVIDIA does, that’s a problem. The customer has to make a decision maybe earlier than they’re willing to make a decision on AMD. We’ve worked through that. We very systematically work through all of the signal integrity, the cabling, a lot of the issues that we knew from our experience doing the supercomputers with HPE.
We designed half-megawatt cabinet systems years ago with HPE, and we learned a lot of lessons there. If you look at Helios, for example, it’s actually larger than an NVL72 rack. It still has that same pod size, 72 GPUs per pod, but it’s bigger physically, which is not an issue for our customers, because the physical space is inconsequential. Because it’s bigger, it’s easier to manufacture, easier to support, and easier to service, and we believe it will be more reliable than a device that has been more focused on density for density’s sake.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Interesting. If you think about that, you mentioned a couple of them, the biggest challenges to kind of attaining that revenue profile that you desire in 2026, what are the risks or the challenges you see? Is it still things like software stack? Do you feel like it’s customer enablement? Do you feel like these things that you mentioned, whether that be full rack solutions or cooling, are now relatively de minimis risks from a technology standpoint?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: We’re paying very close attention to a long list of items. I think we’ve got a very rigorous de-risking, development, and validation plan in place. Obviously, it is a very complex rack-level solution: there are potential mechanical issues, potential signal integrity issues, potential thermal issues. We’re trying to pay attention to all of those, and I think we’ve got them all pretty well in hand. As well, on the software side, particularly for the lead customers, the 20 or so customers that really matter, that are going to drive the overwhelming proportion of the capital investment, we’ve been working very closely with them to make sure that the software they require is going to be there in time. Maybe not the full long tail.
NVIDIA has done a great job investing in AI for many, many years, and they’ve got support for a very long tail of customers. We’re not going to be able to quickly match that, but we’re not trying. We’re trying to make sure that we are fully there at MI450 for the customers that really matter for the 80, 85% of the market.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Yeah, makes sense. Maybe talk about your progress with your sort of new prospects in terms of the U.S. hyperscalers and other customers who are not yet your customers at this point. You know, what key obstacles do you see, from a customer perspective, to them adopting a solution today?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: Fortunately, actually, unlike when we started with the CPU side, all of the major customers, every one of those ones that we just talked about is already an AMD customer, and we’ve already got a data center engagement with them. We’ve got familiarity with them. They understand us. They’re generally all using us on the CPU side. We’ve got a preexisting relationship in that, which is helpful. We’re not trying to build that up as we were at the beginning of the CPU side. We’ve been fortunate enough to have some great relationships on the MI side, on the Instinct side, with several of the major hyperscalers. Obviously, Microsoft, Oracle, Meta are the ones that are most prominent and public. OpenAI, of course, as a user. We’ve been engaged as well with several others.
I think that you will see in our next, even partially on MI355, and then certainly with MI450, you’re going to see the aperture of the end customers and the hyperscalers open up quite a bit. That’s based on the work that we’re doing with them right now and the very close collaboration feedback that we’re getting from them.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Would you say it’s the software piece that’s getting you to that widening aperture of customers, not getting to the very longest tail, but certainly widening beyond the initial set of target customers you had last generation?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: Absolutely. We’ve been fairly systematic about building out the support for the different frameworks, the libraries, the various open source projects that are relevant to these customers. Something like JAX, for example, JAX support is very important to a number of these customers. Our JAX support was relatively immature a year ago; it’s come a long way. We’re trying to be systematic about being fully complete for this targeted set of customers on the software side.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: As you think about your product roadmap going forward, what’s driven your confidence to move to this sort of annual cadence in such a competitive environment? I guess, how are customers helping you prioritize and set your product roadmap today?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: I think the industry, given the excitement and the level of change and innovation, is what’s really driving this annual cadence. By the way, it’s painful for the industry; it’s painful for end customers to take products at this cadence. It’s a competitive imperative. One of the things we’re trying to be thoughtful about going forward, and we did even going from the MI300 to the MI350 generation, is making sure there’s commonality and reuse in the infrastructure, so that it’s not a complete rip and redo with every new generation. We’re containing the change to the things that matter, the things that give performance, which makes it a little easier for the data centers and the customers to adopt.
It’s not quite the old tick-tock strategy that we used to have on the server CPU side, but it is trying to be thoughtful about we’ve got to maintain this rapid pace to be competitive. How do we make it as easy as possible on our customers to accommodate consumption of this technology on that pace?
Jim Schneider, Semiconductor Analyst, Goldman Sachs: I mean, philosophically, from a gross margin perspective, how do you think about pricing to the value you’re providing as you continue to up the game with each new generation of technology? In other words, if the raw performance of MI350 is roughly on par with Blackwell and the MI400 series is going to be on par with Rubin, should we expect more pricing power and improving gross margins within the data center GPU space for Advanced Micro Devices going forward?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: I think there are two things. It’s an interesting market in that it’s concentrated in a few large deals. If I recall correctly, NVIDIA’s most recent announcement is that about 50% of their revenue comes from four customers. It’s a very, very highly concentrated market with extremely large customers. Large deals tend to put a great deal of pressure on margin over time, and likewise, as a real competitive environment develops, that also will put some pressure on. However, what really drives what you can charge is the value that you’re delivering for the end customer. Our focus is just as it has been on the CPU side, where, candidly, our ASPs are quite a bit higher than our competitor’s.
We charge more for our CPUs than our competitor does, and the reason is that we’re delivering superior value. We’re giving performance, we’re giving reliability, we’re giving things that allow us to charge for that technology and for the customers to feel good about the price they’re paying. We’re trying to take the same perspective into the Instinct side, into the GPU side, and be cognizant of how, at each generation, we offer a superior TCO, writ large, to our customers. They’re measuring that, of course, at the cluster level, not at the part level. We’re trying to be very cognizant of designing the product for high performance and for superior TCO, and I think as long as we’re doing that, we’ll get an appropriate return on the investment.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Great. Maybe just kind of quickly pivoting to the server CPU market for a second. Maybe you can kind of level set us, if you would, on how you see growth in that market, kind of prospectively. Is this driven by sort of, you know, replacement cycles, core counts, ASPs? Or how do you think about the structural growth in that market going forward?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: One of the most interesting things happening right now on the CPU side is that we’re actually seeing AI drive new incremental demand for CPUs as well. There’s almost a direct correlation, particularly over the last three to four quarters, between how mature a company is in deploying AI for its own business use and its incremental general-purpose CPU demand. I’m not talking about training; I’m talking about using AI as part of their product offering, or to improve their product offering in some way. The more mature a company is along that progression, the more incremental general-purpose CPU demand we see coming from them. We’re seeing quite a bit of uplift in CPU demand. Some of it is easy to talk about on the agentic AI side.
Hey, AI agents are acting as users, generating demands on existing applications for data or to generate results. Even for non-agentic flows, you’re seeing situations where systems are considering far more possibilities. Say you’re doing a financial analysis or a financial plan. If it was being done by a human, you might run three scenarios. If you’re using AI to do it, you might run 25,000 scenarios. We’re seeing tremendous increases coming, and you can draw a direct connection to AI use. Beyond that, we are seeing continued market share gains on both the cloud and the enterprise side. I think people are getting more and more familiar and comfortable with AMD on the enterprise side, and we continue to increase our investments there.
We expect to see continued strong growth in the CPU franchise, driven both by AI-related increases as well as just continued market share gains.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Yeah. Lastly, to the point you just raised, where do you think your market share stands today in both hyperscale or cloud, as well as enterprise? Ultimately, do you think you can keep outgrowing the market and sort of see some level of plateau in your market share in either or both of those markets?
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: In the extreme, of course, you can’t get above 100%; there is a plateau there somewhere. All kidding aside, look, we’re obviously more represented on the hyperscale side. We’ve got very good share in North America hyperscale, and we’re growing rapidly in Asia as well. That is an area where we do see the TAM expanding quite a bit right now because of AI. We actually see TAM expansion within the cloud segment that’s very strong, much stronger than we expected even a few quarters ago. On the enterprise side, there’s probably about a 20-point share premium on the cloud side versus enterprise. Both are growing very rapidly, and the enterprise share is probably growing a little more rapidly.
I think that as enterprises get more comfortable with and aware of AMD, and perhaps more aware of the overall environment our competitor is in, they’re getting more comfortable giving AMD a shot.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Fair enough.
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: When we get a shot, we generally win.
Jim Schneider, Semiconductor Analyst, Goldman Sachs: Great. With that, we’re out of time, but thank you so much for being here. We really appreciate it.
Forrest Norrod, EVP of Data Center Solutions, Advanced Micro Devices: Thank you so much.