Broadcom Inc. (AVGO), an $848.7 billion market cap semiconductor giant, reported strong financial results for the first quarter of fiscal year 2025, surpassing Wall Street expectations. The company’s earnings per share (EPS) came in at $1.60, outpacing the forecasted $1.51. Revenue reached $14.92 billion, exceeding the anticipated $14.62 billion. Following the announcement, Broadcom’s stock surged 13.98% in aftermarket trading, closing at $204.53. According to InvestingPro, Broadcom maintains its position as a prominent player in the Semiconductors & Semiconductor Equipment industry, with 15+ additional key insights available to subscribers.
Key Takeaways
- Broadcom’s Q1 2025 EPS of $1.60 surpassed expectations by 5.96%.
- Revenue increased by 25% year-over-year to $14.92 billion.
- The company’s stock rose 13.98% in aftermarket trading.
- AI revenue grew significantly, reaching $4.1 billion, a 77% increase year-over-year.
- Broadcom provided a positive revenue outlook for Q2 2025, anticipating $14.9 billion.
Company Performance
Broadcom demonstrated robust performance in Q1 2025, with a 25% year-over-year revenue increase. The company’s focus on AI technology has paid off, with AI revenue soaring by 77% to $4.1 billion. Broadcom continues to capitalize on the growing demand for AI infrastructure, positioning itself as a leader in hardware optimization for AI workloads.
Financial Highlights
- Revenue: $14.92 billion, up 25% year-over-year
- Earnings per share: $1.60, exceeding the forecast of $1.51
- Gross margin: 79.1%
- Operating margin: 66%
- Free cash flow: $6 billion, representing 40% of revenue
Earnings vs. Forecast
Broadcom’s actual EPS of $1.60 exceeded the forecasted $1.51 by 5.96%, marking a significant earnings surprise. The revenue of $14.92 billion also surpassed expectations, indicating strong market demand and effective operational strategies. This performance contrasts with previous quarters, where earnings generally aligned with forecasts, highlighting an exceptional quarter for the company.
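For reference, the beat percentages quoted above follow from the standard earnings-surprise formula, (actual − forecast) / forecast. A quick sketch using the article's figures (the function name is ours, for illustration):

```python
# Earnings surprise: percentage by which actual results exceed the consensus forecast.
def surprise_pct(actual: float, forecast: float) -> float:
    return (actual - forecast) / forecast * 100

eps_surprise = surprise_pct(1.60, 1.51)        # EPS beat, approx. 5.96%
revenue_surprise = surprise_pct(14.92, 14.62)  # revenue beat (in $B), approx. 2.05%

print(f"EPS surprise: {eps_surprise:.2f}%")
print(f"Revenue surprise: {revenue_surprise:.2f}%")
```

Note that the surprise is measured against the forecast, not the actual, which is why the EPS beat of $0.09 on a $1.51 estimate rounds to 5.96% rather than 5.63%.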
Market Reaction
Following the earnings announcement, Broadcom’s stock experienced a notable 13.98% increase in aftermarket trading, closing at $204.53. This surge reflects investor confidence in the company’s ability to exceed earnings expectations and its strong position in the AI market. The stock’s performance stands out compared to its 52-week range, where it previously fluctuated between $119.76 and $251.88. Analyst consensus remains strongly bullish, with a consensus recommendation of 1.41 (where 1 is Strong Buy) and price targets ranging from $181.25 to $300 per share. For detailed valuation analysis and more insights, visit InvestingPro.
Outlook & Guidance
Broadcom has provided optimistic guidance for Q2 2025, projecting consolidated revenue of $14.9 billion, a 19% year-over-year increase. The company expects AI revenue to reach $4.4 billion, up 44% from the previous year. Broadcom’s continued investment in AI semiconductor technology and its strategic focus on custom silicon solutions are expected to drive future growth. The company has demonstrated strong dividend commitment, raising payments for 15 consecutive years with a 28.26% dividend growth in the last twelve months. Access complete dividend analysis and growth projections through InvestingPro’s detailed research report.
Executive Commentary
CEO Hock Tan emphasized Broadcom’s commitment to innovation and performance, stating, "We are stepping up our R&D investment on two fronts." He also highlighted the company’s competitive edge in hardware, saying, "It has become very clear that while they are excellent in software, Broadcom is the best in hardware." These comments underscore Broadcom’s strategic direction and confidence in its market position.
Risks and Challenges
- Supply chain disruptions could impact production and delivery timelines.
- Market saturation in AI infrastructure may limit growth opportunities.
- Macroeconomic pressures, such as inflation, could affect profitability.
- Regulatory changes could pose challenges to AI deployment strategies.
- Competition from other tech giants in the AI space remains a significant threat.
Q&A
During the earnings call, analysts inquired about Broadcom’s approach to securing new AI accelerator customers. The company discussed its selective strategy for design wins, emphasizing performance as a key criterion for hyperscaler selection. Additionally, executives addressed concerns about regulatory impacts, reassuring stakeholders of minimal effects on current AI deployments.
Full transcript - Broadcom Inc (AVGO) Q1 2025:
Conference Operator: Welcome to the Broadcom Inc. First Quarter Fiscal Year 2025 Financial Results Conference Call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yu, Head of Investor Relations of Broadcom Inc.
Ji Yu, Head of Investor Relations, Broadcom Inc.: Thank you, Sherry, and good afternoon, everyone. Joining me on today’s call are Hock Tan, President and CEO; Kiersten Spears, Chief Financial Officer; and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed describing our financial performance for the first quarter of fiscal year 2025. If you did not receive a copy, you may obtain the information from the Investors section of Broadcom’s website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the Investors section of Broadcom’s website.
During the prepared comments, Hock and Kiersten will be providing details of our first quarter fiscal year 2025 results, guidance for our second quarter of fiscal year 2025, as well as commentary regarding the business environment. We’ll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today’s press release. Comments made during today’s call will primarily refer to our non-GAAP financial results. I’ll now turn the call over to Hock.
Hock Tan, President and CEO, Broadcom Inc.: Thank you, Ji, and thank you, everyone, for joining today. In our fiscal Q1 2025, total revenue was a record $14.9 billion, up 25% year on year, and consolidated adjusted EBITDA was again a record at $10.1 billion, up 41% year on year. So let me first provide color on our semiconductor business. Q1 semiconductor revenue was $8.2 billion, up 11% year on year. Growth was driven by AI, as AI revenue of $4.1 billion was up 77% year on year.
We beat our guidance for AI revenue of $3.8 billion due to stronger shipments of networking solutions to hyperscalers for AI. Our hyperscale partners continue to invest aggressively in their next-generation frontier models, which do require high-performance accelerators as well as AI data centers with larger clusters. And consistent with this, we are stepping up our R&D investment on two fronts. One, we’re pushing the envelope of technology in creating the next generation of accelerators. We’re taping out the industry’s first two-nanometer AI XPU packaged in 3.5D as we drive toward a 10,000-teraflop XPU.
Secondly, we have a view toward scaling clusters of 500,000 accelerators for hyperscale customers. We have doubled the radix capacity of the existing Tomahawk 5. And beyond this, to enable AI clusters to scale up on Ethernet toward one million XPUs, we have taped out our next-generation 100-terabit Tomahawk 6 switch, running 200G SerDes at 1.6-terabit bandwidth. We will be delivering samples to customers within the next few months. These R&D investments are very aligned with the roadmaps of our three hyperscale customers as they each race toward one-million-XPU clusters by the end of 2027.
And accordingly, we do reaffirm what we said last quarter: we expect these three hyperscale customers will generate a serviceable addressable market, or SAM, in the range of $60 billion to $90 billion in fiscal 2027. Beyond these three customers, we had also mentioned previously that we are deeply engaged with two other hyperscalers in enabling them to create their own customized AI accelerators. We are on track to tape out their XPUs this year. In the process of working with the hyperscalers, it has become very clear that while they are excellent in software, Broadcom is the best in hardware. Working together is what optimizes their large language models.
It is therefore no surprise to us that, since our last earnings call, two additional hyperscalers have selected Broadcom to develop custom accelerators to train their next-generation frontier models. So even as we have three hyperscale customers to whom we are shipping XPUs in volume today, there are now four more who are deeply engaged with us to create their own accelerators. And to be clear, these four are not included in our estimated SAM of $60 billion to $90 billion in 2027. So we do see an exciting trend here.
New frontier models and techniques put unexpected pressures on AI systems. It’s difficult to serve all classes of models with a single system design point. And therefore, it is hard to imagine that a general-purpose accelerator can be configured and optimized across multiple frontier models. And as I mentioned before, the trend toward XPUs is a multiyear journey. So coming back to 2025, we see a steady ramp in deployment of our XPUs and networking products.
In Q1, AI revenue was $4.1 billion, and we expect Q2 AI revenue to grow to $4.4 billion, which is up 44% year on year. Turning to non-AI semiconductors. Revenue of $4.1 billion was down 9% sequentially on a seasonal decline in wireless. In aggregate, during Q1, the recovery in non-AI semiconductors continued to be slow. Broadband, which bottomed in Q4 2024, showed a double-digit sequential recovery in Q1 and is expected to be up similarly in Q2 as service providers and telcos step up spending.
Server storage was down single digits sequentially in Q1 but is expected to be up high single digits sequentially in Q2. Meanwhile, enterprise networking continues to remain flattish in the first half of fiscal 2025 as customers continue to work through channel inventory. While wireless was down sequentially due to a seasonal decline, it remained flat year on year. In Q2, wireless is expected to be the same, flat again year on year. Resales in industrial were down double digits in Q1 and are expected to be down in Q2.
So reflecting the foregoing puts and takes, we expect non-AI semiconductor revenue in Q2 to be flattish sequentially, even though we are seeing bookings continue to grow year on year. In summary, for Q2, we expect total semiconductor revenue to grow 2% sequentially, up 17% year on year to $8.4 billion. Turning now to the Infrastructure Software segment. Q1 Infrastructure Software revenue of $6.7 billion was up 47% year on year and up 15% sequentially, exaggerated though by deals which slipped from Q4 into Q1. Now this is the first quarter, Q1 2025, where the year-on-year comparables include VMware in both quarters. We’re seeing significant growth in the software segment for two reasons.
One, we’re converting from a footprint of largely perpetual licenses to one of full subscription, and as of today, we are over 60% done. Two, these perpetual licenses were largely for compute virtualization only, otherwise called vSphere. We are upselling customers to the full-stack VCF, which enables the entire data center to be virtualized. And this enables customers to create their own private cloud environment on prem.
And as of the end of Q1, approximately 70% of our largest 10,000 customers have adopted VCF. As these customers consume VCF, we do see a further opportunity for future growth. As large enterprises adopt AI, they have to run their AI workloads on their on-prem data centers, which will include both GPU servers as well as traditional CPUs. And just as VCF virtualizes these traditional data centers using CPUs, VCF will also virtualize GPUs on a common platform and enable enterprises to import AI models to run on their own data on prem. This platform, which virtualizes the GPU, is called the VMware Private AI Foundation.
And as of today, in collaboration with NVIDIA, we have 39 enterprise customers for the VMware Private AI Foundation. Customer demand has been driven by our open ecosystem and our superior load-balancing and automation capabilities, which allow customers to intelligently pool and run workloads across both GPU and CPU infrastructure, leading to much reduced costs. Moving on to the Q2 outlook for software. We expect revenue of $6.5 billion, up 23% year on year. So in total, we’re guiding Q2 consolidated revenue to be approximately $14.9 billion, up 19% year on year.
And we expect this will drive Q2 adjusted EBITDA to approximately 66% of revenue. With that, let me turn the call over to Kiersten.
Kiersten Spears, Chief Financial Officer, Broadcom Inc.: Thank you, Hock. Let me now provide additional detail on our Q1 financial performance. On a year-on-year comparable basis, keep in mind that Q1 of fiscal 2024 was a 14-week quarter, while Q1 of fiscal 2025 is a 13-week quarter. Consolidated revenue was $14.9 billion for the quarter, up 25% from a year ago. Gross margin was 79.1% of revenue in the quarter, better than we originally guided, on higher infrastructure software revenue and a more favorable semiconductor revenue mix.
Consolidated operating expenses were $2.0 billion, of which $1.4 billion was for R&D. Q1 operating income of $9.8 billion was up 44% from a year ago, with operating margin at 66% of revenue. Adjusted EBITDA was a record $10.1 billion, or 68% of revenue, above our guidance of 66%. This figure excludes $142 million of depreciation. Now a review of the P&L for our two segments, starting with semiconductors.
Revenue for our Semiconductor Solutions segment was $8.2 billion and represented 55% of total revenue in the quarter. This was up 11% year on year. Gross margin for our Semiconductor Solutions segment was approximately 68%, up 70 basis points year on year, driven by revenue mix. Operating expenses increased 3% year on year to $890 million on increased investment in R&D for leading-edge AI semiconductors, resulting in semiconductor operating margin of 57%. Now moving on to infrastructure software.
Revenue for infrastructure software of $6.7 billion was 45% of total revenue and up 47% year on year, based primarily on increased revenue from VMware. Gross margin for Infrastructure Software was 92.5% in the quarter, compared to 88% a year ago. Operating expenses were approximately $1.1 billion in the quarter, resulting in infrastructure software operating margin of 76%. This compares to operating margin of 59% a year ago. This year-on-year improvement reflects our disciplined integration of VMware and sharp focus on deploying our VCF strategy.
Moving on to cash flow. Free cash flow in the quarter was $6 billion and represented 40% of revenue. Free cash flow as a percentage of revenue continues to be impacted by cash interest expense from debt related to the VMware acquisition and by cash taxes due to the mix of U.S. taxable income, the continued delay in the reenactment of Section 174, and the impact of the corporate AMT.
We spent $100 million on capital expenditures. Days sales outstanding were 30 days in the first quarter, compared to 41 days a year ago. We ended the first quarter with inventory of $1.9 billion, up 8% sequentially, to support revenue in future quarters. Our days of inventory on hand were 65 days in Q1, as we continue to remain disciplined in how we manage inventory across the ecosystem. We ended the first quarter with $9.3 billion of cash and $68.8 billion of gross principal debt.
During the quarter, we repaid $495 million of fixed-rate debt and $7.6 billion of floating-rate debt with new senior notes, commercial paper and cash on hand, reducing debt by a net $1.1 billion. Following these actions, the weighted average coupon rate and years to maturity of our $58.8 billion in fixed-rate debt are 3.8% and 7.3 years, respectively. The weighted average coupon rate and years to maturity of our $6 billion in floating-rate debt are 5.4% and 3.8 years, respectively. And our $4 billion in commercial paper is at an average rate of 4.6%. Turning to capital allocation. In Q1, we paid stockholders $2.8 billion of cash dividends based on a quarterly common stock cash dividend of $0.59 per share.
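As a quick sanity check, the blended borrowing rate implied by the three debt tranches just described can be computed as a principal-weighted average. This blend is our own back-of-the-envelope calculation, not a company-disclosed figure:

```python
# Broadcom debt tranches as disclosed on the call: (principal in $B, coupon/rate in %).
tranches = [
    (58.8, 3.8),  # fixed-rate debt
    (6.0, 5.4),   # floating-rate debt
    (4.0, 4.6),   # commercial paper
]

total = sum(principal for principal, _ in tranches)
blended = sum(principal * rate for principal, rate in tranches) / total

# Total matches the $68.8B gross principal debt stated above;
# the blended rate works out to roughly 4%.
print(f"Total gross debt: ${total:.1f}B, blended rate: {blended:.2f}%")
```

The weighting by principal matters here: although the floating-rate and commercial-paper tranches carry noticeably higher rates, they are small relative to the fixed-rate notes, so the blend stays close to the 3.8% fixed coupon.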
We spent $2 billion to repurchase 8.7 million AVGO shares from employees as those shares vested, for withholding taxes. In Q2, we expect the non-GAAP diluted share count to be approximately 4.95 billion shares. Now moving on to guidance. Our guidance for Q2 is for consolidated revenue of $14.9 billion, with semiconductor revenue of approximately $8.4 billion, up 17% year on year. We expect Q2 AI revenue of $4.4 billion, up 44% year on year.
For non-AI semiconductors, we expect Q2 revenue of $4 billion. We expect Q2 infrastructure software revenue of approximately $6.5 billion, up 23% year on year. We expect Q2 adjusted EBITDA to be about 66%. For modeling purposes, we expect Q2 consolidated gross margin to be down approximately 20 basis points sequentially on the revenue mix of infrastructure software and product mix within semiconductors. As Hock discussed earlier, we are increasing our R&D investment in leading-edge AI in Q2, and accordingly, we expect adjusted EBITDA to be approximately 66%. We expect the non-GAAP tax rate for Q2 and fiscal year 2025 to be approximately 14%.
That concludes my prepared remarks. Operator, please open up the call for questions.
Conference Operator: Thank you. And our first question will come from the line of Ben Reitzes with Melius. Your line is open.
Ben Reitzes, Analyst, Melius: Hey guys, thanks a lot and congrats on the results. Hock, you talked about four more customers coming online. Can you just talk a little bit more about the trend you’re seeing? Can any of these customers be as big as the current three? And what does this say about the custom silicon trend overall and your optimism and upside to the business long term?
Hock Tan, President and CEO, Broadcom Inc.: Very interesting question, Ben, and thanks for your kind wishes. But, by the way, these four are not yet customers as we define it. As I’ve always said, in developing and creating XPUs, we are not really the creator of those XPUs, to be honest. We enable each of those hyperscaler partners we engage with to create that chip and basically to create that compute system, call it DevRel. And it comprises the model, the software model, working closely with the compute engine, the XPU, and the networking that binds together the clusters of those multiple XPUs as a whole to train those large frontier models.
And so even though we create the hardware, it still has to work with the software models and algorithms of those partners of ours before it becomes fully deployable at scale, which is why we define customers in this case as those where we know they are deploying at scale and we have received the production volumes to enable it to run. And for that, we only have three, just to reiterate. The four are, I call it, partners who are trying to create the same thing as the first three, each of them to train their own frontier models. And as I also said, it doesn’t happen overnight. To do the first chip would typically take one and a half years, and that’s very accelerated, which we can do given that we essentially have a framework and a methodology that works right now.
It works for the three customers, no reason for it to not work for the four. But we still need those four partners to create and to develop the software, which we don’t do, to make it work. And to answer your question, there’s no reason why these four guys would not create demand in the range of what we’re seeing with the first three guys, but probably later. It’s a journey. They started it later, and so they will probably get there later.
Ben Reitzes, Analyst, Melius: Thank you very much.
Conference Operator: Thank you. One moment for our next question. And that will come from the line of Harlan Sur with JPMorgan. Your line is open.
Harlan Sur, Analyst, JPMorgan: Good afternoon and great job on the strong quarterly execution, Hock and team. Great to see the continued momentum in the AI business here in the first half of your fiscal year and the continued broadening out of your AI ASIC customer base. I know, Hock, last earnings you did call out a strong ramp in the second half of the fiscal year driven by new three-nanometer AI accelerator programs ramping. Can you just help us, either qualitatively or quantitatively, profile the second-half step-up relative to what the team just delivered here in the first half? Has the profile changed, either favorably or less favorably, versus what you thought maybe ninety days ago?
Because quite frankly, I mean, a lot has happened since last earnings, right? You’ve had the dynamics like DeepSeek and focus on AI model efficiency. But on the flip side, you’ve had strong CapEx outlooks by your cloud and hyperscale customers. So any color on the second half AI profile would be helpful.
Hock Tan, President and CEO, Broadcom Inc.: You’re asking me to look into the minds of my customers. And I hate to tell you, they don’t show me their entire mindset here. But why are we beating the numbers so far in Q1, and why does it seem to be encouraging in Q2? Partly from improved networking shipments, as I indicated, to foster those XPUs and AI accelerators, even in some cases GPUs, together for the hyperscalers. And that’s good.
And partly also we think there is some pull ins of shipments and acceleration, call it that way, of shipments in fiscal twenty twenty five.
Harlan Sur, Analyst, JPMorgan: And on the second half that you talked about ninety days ago, the second half three nanometer ramp, is that still very much on track?
Hock Tan, President and CEO, Broadcom Inc.: Harlan, thank you. I only guide Q2. Sorry. Let’s not speculate on the second half.
Harlan Sur, Analyst, JPMorgan: Okay. Thank you, Hock.
Hock Tan, President and CEO, Broadcom Inc.: Thank you.
Conference Operator: Thank you. One moment for our next question. And that will come from the line of William Stein with Truist Securities. Your line is open.
William Stein, Analyst, Truist Securities: Great. Thank you for taking my question. Congrats on these pretty great results. It seems from the news headlines about tariffs and about DeepSeek that there may be some disruptions; some customers and some other complementary suppliers seem to feel a bit paralyzed, perhaps, and have difficulty making tough decisions. Those tend to be really useful times for great companies to emerge as something bigger and better than they were in the past.
You’ve grown this company in a tremendous way over the last decade plus, and you’re doing great now, especially in this AI area. But I wonder if you’re seeing that sort of disruption from these dynamics that we suspect are happening based on headlines of what we see from other companies. And aside from adding these customers in AI, I’m sure there’s other great stuff going on, but should we expect some bigger changes to come from Broadcom as a result of this?
Hock Tan, President and CEO, Broadcom Inc.: You pose a very interesting set of issues and questions, and those are very relevant, interesting issues. The only problem we have at this point is, I would say, it’s really too early to know where we all land. I mean, there’s the threat, the noise, of tariffs, especially on chips, that hasn’t materialized yet, nor do we know how it will be structured. So we don’t know.
But what we do experience, and we are living it now, is the disruption, in a positive way I should add, a very positive disruption in semiconductors from generative AI. Generative AI, for sure, and I said that before, so at the risk of repeating here: we feel it more than ever is really accelerating the development of semiconductor technology, both process and packaging as well as design, toward higher and higher performance accelerators and networking functionality. We’re seeing that innovation, those upgrades, occur every month as we face new interesting challenges, particularly with XPUs, where we’re trying to optimize to the frontier models of our partners, our customers, as well as our hyperscale partners. It’s a privilege, almost, for us to participate in it and try to optimize. And by optimize, I mean, you look at an accelerator.
You can look at it in simple terms, at a high level: it wants to be measured not just on one single metric, which is compute capacity, how many teraflops. It’s more than that. It’s also tied to the fact that this is a distributed computing problem. It’s not just the compute capacity of a single XPU or GPU. It’s also the network bandwidth.
It ties itself to the next adjacent XPU or GPU. So that has an impact. So you’re doing that, and you now have to balance with that. Then you decide: are you doing training, or are you doing prefill, post-training, fine-tuning?
And again, then comes how much memory you balance against that. And with it, how much latency you can afford, which is memory bandwidth. So you look at at least four variables, maybe even five if you include memory bandwidth, not just memory capacity, when you go straight to inference. So we have all these variables to play with, and we try to optimize it. So all this is very, very... I mean, it’s a great experience for our engineers to push the envelope on how to create all those chips.
And so that’s the biggest disruption we see right now: the sheer effort to create and push the envelope on generative AI, trying to create the best hardware infrastructure to run it. Beyond that, yes, there are other things too that come into play, because AI, as I indicated, does not just drive hardware for enterprises; it drives the way they architect their data centers. Keeping data private and under control becomes important. So suddenly, the push of workloads toward the public cloud may take a little pause, as large enterprises particularly come to recognize that if you want to run AI workloads, you probably think very hard about running them on prem. And suddenly, you push yourself toward saying you’ve got to upgrade your own data centers to manage your own data and run it on prem.
And that’s also pushing a trend that we have been seeing now over the past twelve months. Hence my comments on VMware Private AI Foundation. Enterprises especially, pushing in this direction, are quickly recognizing how and where they should run their AI workloads. So those are trends we see today, a lot of it coming out of AI, and a lot of it coming out of sensitive rules on sovereignty in cloud and data. As far as the tariffs you mention are concerned, I think it’s too early for us to figure out where it will all land.
Give it another three to six months, and we’ll probably have a better idea of where it goes.
William Stein, Analyst, Truist Securities: Thank you.
Conference Operator: Thank you. One moment for our next question. And that will come from the line of Ross Seymore with Deutsche Bank. Your line is open.
Ross Seymore, Analyst, Deutsche Bank: Thanks for taking the question. Hock, I want to go back to the XPU side of things, going from the four new engagements, the not-yet-named customers, two last quarter and two more that you announced today. I want to talk about going from design win to deployment. How do you judge that? Because there is some debate about tons of design wins where the deployments actually don’t happen, either they never occur or the volume is never what was originally promised.
How do you view that kind of conversion ratio? Is there a wide range around it? Or is there some way you could help us kind of understand how that works?
Hock Tan, President and CEO, Broadcom Inc.: Well, Ross, interesting question, and I’ll take the opportunity to say the way we look at design wins is probably very different from the way many of our peers look at it out there. Number one, to begin with, we believe in a design win when we know our product is produced at scale and is actually deployed, literally deployed, in production. So that takes a long lead time, because from tape-out to getting the product, it easily takes a year, and from the product in the hands of our partner to when it goes into scale production, it takes six months to a year in our experience, number one. And number two, I mean, producing and deploying 5,000 XPUs, that’s a joke. That’s not real production in our view.
And so we also limit ourselves in selecting partners to people who really need that large volume. You need that large volume, from our viewpoint, at scale right now, mostly in training, training of large language models, frontier models, on a continuing trajectory. So that limits how many customers or how many potential customers exist out there, Ross. And we tend to be very selective about whom we pick to begin with. So when we say design win, it really is at scale.
It’s not something that starts and dies in six months or a year. Basically, it’s a selection of customers. It’s just the way we have run our ASIC business in general for the last fifteen years. We pick and choose the customers because we know this, and we do multiyear roadmaps with these customers because we know these customers are sustainable. To put it bluntly, we don’t do it for startups.
Thank you.
Conference Operator: And that will come from the line of Stacy Rasgon with Bernstein Research. Your line is open.
Stacy Rasgon, Analyst, Bernstein Research: Hi guys. Thanks for taking my question. I wanted to go to the three customers that you do have in volume today. And what I wanted to ask was, is there any concern about some of the new regulations or the AI diffusion rules that are going to get put in place supposedly in May impacting any of those design wins or shipments? It sounds like you think all three of those are still on at this point.
But anything you could tell us about worries about new regulations or AI diffusion rules impacting any of those wins would be helpful.
Hock Tan, President and CEO, Broadcom Inc.: Thank you. In this current era of geopolitical tensions and fairly dramatic actions all around by governments, yes, there’s always some concern at the back of everybody’s mind. But to answer your question directly, no, we don’t have any concerns.
Stacy Rasgon, Analyst, Bernstein Research: Got it. So none of those are going into China or to Chinese customers, then?
Hock Tan, President and CEO, Broadcom Inc.: No comment. Are you trying to not see it?
Stacy Rasgon, Analyst, Bernstein Research: Okay. That’s helpful. Thank you.
Hock Tan, President and CEO, Broadcom Inc.: Thank you.
Conference Operator: One moment for our next question. And that will come from the line of Vivek Arya with Bank of America. Your line is open.
Vivek Arya, Analyst, Bank of America: Thanks for taking my question. Hock, whenever you have described your AI opportunity, you have always emphasized the training workload. But the perception is that the AI market could be dominated by the inference workload, especially with these new reasoning models. So what happens to your opportunity and share if the mix moves more towards inference? Does it create a bigger TAM for you than the $60 billion to $90 billion, does it keep it the same but with a different mix of products, or does a more inference-heavy market favor a GPU over an XPU?
Thank you.
Hock Tan, President and CEO, Broadcom Inc.: That’s a good question, an interesting question. By the way, I do talk a lot about training, but our chips, our experience, also focus on inference as a separate product line. They do. And that’s why I can say the architecture of those chips is very different from the architecture of the training chips.
And so it’s a combination of those two, I should add, that adds up to this $60 billion to $90 billion. So if I had not been clear, I do apologize, it’s a combination of both. But having said that, the larger part of the dollars comes from training, not inference, within the served available market that we have talked about so far.
Harlan Sur, Analyst, JPMorgan: Thank you.
Conference Operator: One moment for our next question. And that will come from the line of Harsh Kumar with Piper Sandler. Your line is open.
Harsh Kumar, Analyst, Piper Sandler: Thanks, Broadcom team, and again, great execution. Just had a quick question. We’ve been hearing that almost all of the large clusters, the ones at 100,000 XPUs and above, are all going to Ethernet. I was wondering if you could help us understand the importance, when the customer is making a selection, of choosing between a guy that has the best switch ASIC, such as you, versus a guy that might have the compute there. Can you talk about what the customer is thinking and what are the final points that they want to hit upon when they make that selection for the networking?
Hock Tan, President and CEO, Broadcom Inc.: Okay, I see. Yes, in the case of the hyperscalers it comes down very much to performance. And it’s performance in what you’re mentioning about connecting, scaling up and scaling out those AI accelerators, be they XPU or GPU, among hyperscalers. And in most cases among those hyperscalers we engage with, when it comes to connecting those clusters, they are very driven by performance.
I mean, you are in a race to really get the best performance out of your hardware as you train and continue to train your frontier models. That matters more than anything else. So the basic first thing they go for is proven technology. It’s a proven piece of hardware, a proven system, a subsystem in our case, that makes it work. And in that case, we tend to have a big advantage because, I mean, networking is us; switching and routing have been us for the last ten years at least.
And the fact that it’s AI just makes it more interesting for our engineers to work on. But it’s basically based on proven technology and experience in pushing that, pushing the envelope on going from 800 gigabits per second of bandwidth to 1.6 terabits and moving on to 3.2, which is exactly why we keep stepping up this rate of investment in our products like Tomahawk 5, where we doubled the radix to deal with just one hyperscaler, because they want high radix to create larger clusters while running bandwidths that are smaller. But that doesn’t stop us from moving ahead to the next generation, Tomahawk 6. And I dare say we’re even planning Tomahawk 7 and 8 right now. And we’re speeding up the rate of development.
And it’s all largely for those few guys, by the way. So we’re making a lot of investment for very few customers, hopefully with very large served available markets. But if nothing else, those are the big bets we are placing.
Harsh Kumar, Analyst, Piper Sandler: Thank you, Hock.
Conference Operator: Thank you. One moment for our next question. And that will come from the line of Timothy Arcuri with UBS. Your line is open.
Timothy Arcuri, Analyst, UBS: Thanks a lot. Hock, in the past you have mentioned XPU units growing from about 2 million last year to about 7 million, you said, in the 2027 to 2028 timeframe. My question is, do these four new customers add to that 7 million unit number? I know in the past you sort of talked about an ASP of $20,000 by then. So the first three customers are clearly a subset of that 7 million units. So do these four new engagements drive that 7 million higher, or do they just fill in to get to that 7 million? Thanks.
Hock Tan, President and CEO, Broadcom Inc.: Thanks, Tim, for asking that. To clarify, as I thought I made clear in my comments, no, the market we are talking about, including when you translate the units, is only among the three customers we have today. The other four we talk about are engagement partners. We don’t consider them customers yet, and therefore they are not in our served available market.
William Stein, Analyst, Truist Securities: Okay. So they would add to that number. Okay. Thanks,
Hock Tan, President and CEO, Broadcom Inc.: Bob. Thanks.
Conference Operator: One moment for our next question. And that will come from the line of C. J. Muse with Cantor Fitzgerald. Your line is open.
C.J. Muse, Analyst, Cantor Fitzgerald: Yes, good afternoon. Thank you for taking the question. I have to follow up on your prepared remarks and comments earlier around optimization, your best hardware with the hyperscalers’ great software. I’m curious how expanding your portfolio now to six mega-scale frontier-model customers will work when they won’t necessarily share tremendous information with you, in a world where these six truly want to differentiate. So obviously the goal for all of these players is exaflops per second per dollar of CapEx per watt. And to what degree are you aiding them in these efforts? And where does the Chinese wall start, where they want to differentiate and not share with you some of the work that you’re doing? Thank you.
Hock Tan, President and CEO, Broadcom Inc.: We only provide very basic, fundamental semiconductor technology to enable these guys to take what we have and optimize it to their own particular models and the algorithms that relate to those models. That’s it. That’s all we do. So that’s the level at which a lot of that optimization happens for each of them. And as I mentioned earlier, there are maybe five degrees of freedom that we play with.
And even with five degrees of freedom, there’s only so much we can do at that point. But basically, how we optimize it is all tied to the partner telling us how they want to do it. So there’s only so much we have visibility on. But that’s what the XPU model is: optimization translating to performance, but also power.
That’s very important in how they play. It’s not just cost. Power translates into total cost of ownership eventually. It’s how you design in power, and how we balance it in terms of the size of the cluster and whether they use it for training, pre-training, post-training, inference, test-time scaling. All of them have their own characteristics. And that’s the advantage of doing that XPU and working closely with them to create that stuff.
Now as far as your question on China and all that, frankly, I don’t have any opinion on that at all. To us, it’s a technical game.
C.J. Muse, Analyst, Cantor Fitzgerald: Thank you very much.
Conference Operator: One moment for our next question. And that will come from the line of Christopher Rolland with Susquehanna. Your line is open.
Christopher Rolland, Analyst, Susquehanna: Hey, thanks so much for the question. And this one’s maybe for Hock and for Kirsten. I’d love to know, since you have the complete connectivity portfolio, how you see new greenfield scale-up opportunities playing out here, whether optical or copper or really anything, and how additive this could be for your company. And then, Kirsten, I think OpEx is up. Maybe just talk about where those OpEx dollars are going within the AI opportunity. Thanks so much.
Hock Tan, President and CEO, Broadcom Inc.: Your question is very broad-reaching, as is our portfolio. Yes, we have that advantage, and a lot of the hyperscale customers we deal with are talking about a lot of expansion. But it’s almost all greenfield, less so brownfield. It’s very greenfield, it’s all expansion, and it all tends to be next generation when we do it, which is very exciting.
So the opportunity is very, very high. And we can do it in copper, but where we see a lot of opportunity is when we provide the networking connectivity through optical. So there are a lot of active elements, including either multimode lasers, which are called VCSELs, or edge-emitting lasers for basically single-mode. And we do both. So there is a lot of opportunity, in scale-up as well as scale-out.
We used to do, and still do, a lot of other protocols beyond Ethernet, such as PCI Express, where we are on the leading edge. And on the networking and switching architecture, so to speak, we offer both: a very intelligent switch, like our Jericho family, with a dumb NIC, or a very smart NIC with a dumb switch, which is the Tomahawk. We offer both architectures. So yes, we have a lot of opportunities from it.
All things said and done, this nice wide portfolio adds up to probably, as I said in prior quarters, about 20% of our total AI revenue, maybe going to 30%. Last quarter we hit almost 40%, but that’s not the norm. Typically all those other portfolio products still add up to a nice, decent amount of revenue for us, but within the sphere of AI they add up to, on average, close to 30%, with the XPU accelerators the other 70%. If that’s what you’re driving at, perhaps that sheds some light on how one matters versus the other. But we have a wide range of products on the connectivity and networking side. They just add up to that 30%.
Christopher Rolland, Analyst, Susquehanna: Thanks so much, Hock.
Kirsten Spears, Chief Financial Officer, Broadcom Inc.: And then on the R&D front, as I outlined, on a consolidated basis we spent $1.4 billion on R&D in Q1, and I stated that it would be going up in Q2. Hock clearly outlined in his script the two areas we’re focusing on. Now, I would tell you, as a company we focus on R&D across all of our product lines so that we can stay competitive with next-generation product offerings. But he did outline that we are focusing on taping out the industry’s first two-nanometer AI XPU packaged in 3D. That was one in his script, and that’s an area we’re focusing on.
And then he mentioned that we’ve doubled the radix capacity of the existing Tomahawk 5 to enable our AI customers to scale up on Ethernet towards 1 million XPUs. So that’s a huge focus of the company.
Christopher Rolland, Analyst, Susquehanna: Yes. Thank you very much, Kirsten.
Conference Operator: And one moment for our next question. And that will come from the line of Vijay Rakesh with Mizuho. Your line is open.
Vijay Rakesh, Analyst, Mizuho: Hi, Hock. Thanks. Just a quick question on the networking side. Just wondering how much it goes up sequentially on the AI side? And any thoughts around M&A going forward? Seeing a lot of headlines around the Intel Products Group, etcetera. Thanks.
Hock Tan, President and CEO, Broadcom Inc.: Okay. On the networking side, as I indicated, Q1 showed a bit of a surge, but I don’t expect that mix of sixty-forty, 60% compute and 40% networking, to be the norm. I think the norm is closer to seventy-thirty, maybe at best 30% networking.
And so who knows what Q2 is; we don’t see Q1’s mix as continuing, that’s just in my mind a temporary blip. The norm will be seventy-thirty if you take it across a period of time, like six months or a year, to answer your question. On M&A, no, I’m too busy doing AI and VMware at this point. We’re not thinking of it at this point.
Vijay Rakesh, Analyst, Mizuho: Thanks, Hock.
Conference Operator: Thank you. That is all the time we have for our question and answer session. I would now like to turn the call back over to Ji Yu for any closing remarks.
Ji Yu, Head of Investor Relations, Broadcom Inc.: Thank you, Sherry. Broadcom currently plans to report its earnings for the second quarter of fiscal year 2025 after close of market on Thursday, June 5, 2025. A public webcast of Broadcom’s earnings conference call will follow at 2 p.m. Pacific.
That will conclude our earnings call today. Thank you all for joining. Sherry, you may end the call.
Conference Operator: Thank you. Ladies and gentlemen, thank you for participating. This concludes today’s program. You may now disconnect.