Intel at Global Technology Conference: Strategic Shifts and AI Focus

Published 18/11/2025, 23:30

On Tuesday, 18 November 2025, Intel Corporation (NASDAQ:INTC) presented its strategic direction at the Global Technology, Internet, Media & Telecommunications Conference 2025. The discussion, featuring John Pitzer, who heads Intel’s investor relations team, focused on Intel’s transformation efforts, AI strategy, and operational challenges. While Intel aims to boost market share and improve gross margins, it faces supply constraints and competitive pressures.

Key Takeaways

  • Intel is prioritizing a cultural shift towards an engineer-focused, customer-centric approach.
  • The company is navigating supply constraints, expecting them to peak in Q1 2026.
  • A $5 billion investment from NVIDIA, part of a broader collaboration, underscores the strength of Intel’s x86 ecosystem.
  • Intel’s AI strategy targets advancements in PC and server markets.
  • Gross margin improvements are a key focus, with long-term potential to match fabless peers.

Financial Results

  • Q4 2025 Gross Margin Guidance:

- Midpoint guided to 36.5%, a decrease of 350 basis points sequentially.

- 50 basis points decline due to Altera deconsolidation.

- 300 basis points decline from the early ramp of Intel 18A, pricing actions, and supply constraints (see the illustrative bridge below the list).

  • Server Demand:

- Strong demand despite ARM and AMD market share gains.

- Granite Rapids ramping well but limited by wafer supply.

  • Margin Improvement:

- Aiming for improvements through 2026 and beyond.

- Long-term potential to be industry comparable with fabless peers.
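
For readers who want the arithmetic behind the gross margin guidance above, here is a minimal sketch of the implied bridge. It assumes the "plus or minus equally" three-way split of the roughly 300 basis points works out to about 100 basis points each, and the 40% starting point is simply the 36.5% guide plus the stated 350 basis point sequential decline.

```python
# Illustrative Q4 2025 gross-margin bridge (a sketch, not reported data).
# The equal three-way split of the ~300 bps is an approximation based on
# management's "plus or minus equally distributed" comment.

prior_quarter_gm = 40.0  # implied: 36.5% guide + 350 bps sequential decline

headwinds_bps = {
    "Altera deconsolidation": 50,
    "Early Intel 18A ramp (Oregon wafer costs)": 100,
    "Pricing actions (Arrow Lake / Lunar Lake)": 100,
    "Supply constraints and mix": 100,
}

guided_gm = prior_quarter_gm - sum(headwinds_bps.values()) / 100
print(f"Implied Q4 gross-margin guide: {guided_gm:.1f}%")  # -> 36.5%
```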

Operational Updates

  • Panther Lake Launch:

- First SKU expected by year-end, with yield improvements on track.

  • Intel 14A:

- Actively seeking external customers; development could halt without a material customer.

  • NVIDIA Partnership:

- $5 billion investment to close by year-end.

- Custom Xeon part to be integrated into NVIDIA’s data center system.

  • Server Roadmap:

- Revisions underway to address competitive challenges.

Future Outlook

  • Cultural Transformation:

- Emphasizing an engineer-focused and customer-centric approach.

  • CCG and DCAI Roadmaps:

- Aiming to drive market share gains and improve gross margins.

  • AI Strategy:

- Developing inference-specialized GPUs for agentic and physical AI.

  • Foundry Business:

- Targeting profit break-even on a run-rate basis exiting 2027.

Q&A Highlights

  • NVIDIA Partnership Workloads:

- Intel to provide a custom Xeon part for data centers; NVIDIA to handle integration and go-to-market strategy.

  • AI Strategy:

- Focus on inference-specialized GPUs for agentic and physical AI.

  • ASIC Strategy:

- Pursuing x86 or ARM-based ASICs, leveraging Intel’s ecosystem and system know-how.

  • Server Margin Depression:

- Efforts to become more cost-efficient amid depressed data center margins.

Readers are encouraged to refer to the full transcript for a comprehensive understanding of Intel’s strategic plans and insights shared during the conference.

Full transcript - Global Technology, Internet, Media & Telecommunications Conference 2025:

Srini Pajjuri, Semis Analyst: Srini Pajjuri, I’m the semis analyst here. And we’re delighted to have Intel today: John Pitzer, who heads the IR team there, and also the CVP of Treasury. Before we start, John asked me to read a forward-looking statement, so I’m going to try this. Before we begin, please note that today’s discussions may contain forward-looking statements that are subject to various risks and uncertainties, and may reference non-GAAP financial measures. Please refer to Intel’s most recent earnings release and annual report on Form 10-K and other filings with the SEC for more information on the risk factors that could cause actual results to differ materially, and additional information on Intel’s non-GAAP financial measures, including reconciliations where appropriate to the corresponding GAAP financial measures. Okay, that was pretty good.

John Pitzer, Intel: Thanks for that, Srini. I appreciate that.

Yeah, that wasn’t easy, but hey, made it. Okay, yeah, thanks for joining, John. So, you know, it’s been, what, eight months since Lip-Bu joined, and a lot of changes. You know, gone through a major restructuring. I would say the past three months, a lot of news flow, mostly positive. Maybe, you know, start with that. Now that, you know, the restructuring is done, the balance sheet is in a pretty good shape, and also you got into a partnership with NVIDIA, you know, that certainly has been quite positive. Let’s talk about, you know, what’s on the plate, you know, for the management as we look out to the next 6 to 12 months. Maybe, you know, we can talk about what are the top two or three priorities for the management going forward.

Yeah, it’s a good question. By the way, thanks for inviting me, and appreciate the time this afternoon with everybody in the room. I mean, listen, I think at our core, what Lip-Bu is trying to drive as far as the transformation is really cultural. The top priority, there’s a lot that we’re trying to get done at the business unit level, but it all really is built on a foundation of getting the culture right. I think Lip-Bu has talked about this in several settings. He really wants to get Intel back to being an engineer-focused, customer-centric organization. I think, you know, as part of the restructuring, we’ve done, I think, a lot to really simplify the organization, take bureaucracy out, you know, drive, I think, better, faster decision-making. I don’t think you’re ever done getting culture right.

I think if Lip-Bu were here, he would still stress that getting the culture right is really the foundation of changing a lot of what we need to change at the business unit level. Now, I know that was not your question. Your question really was at the BU level, what are we focused on? I think priority number one is really getting a good and successful launch of Panther Lake, which also coincides with 18A yields and yield improvement. Feel really good about the fact that we are going to get our first SKU out by end of year. I will put a plug in for people in Vegas in January at CES. You are going to hear the CCG team talk a lot about Panther Lake, and OEM partners talk a lot about Panther Lake, and we are really excited about that launch.

I think sticking with Intel Foundry, clearly landing an Intel 14A external customer is pretty critical over the next, call it 6 to 12 months. I’m sure you’ll have some questions around that. As we pivot to sort of the business units, I think job number one is to really, I think, establish a roadmap in both CCG and DCAI that drives both market share gains and gross margin improvement. I know you’ve got some questions there, so I won’t elaborate. Finally, it’s really prosecuting our accelerator strategy in AI that’s both on the ASIC side of the house and on the GPU side of the house.

Okay, that’s a good start. Maybe, you know, we can start with the NVIDIA partnership because that gets a lot of attention. So, you know, NVIDIA is investing $5 billion in Intel. At the same time, you’re going to be part of NVLink Fusion. Maybe, you know, talk about what your current position is when it comes to AI head node CPUs, and how this might actually change it, and maybe enhance it. And also, if you can kind of touch upon what sort of workloads are we talking here? Because NVIDIA seems to be doing quite well with their own ARM CPU, and now they’re entering a partnership with you. So how do you see that opportunity? Are there different workloads that, you know, x86, you know, targets? Just anything you can tell us about the timing and the potential opportunity.

Yeah, it’s a really good question. We couldn’t have been more pleased to make the two announcements we made with NVIDIA in the calendar third quarter. One was the collaboration both on data center and client. The other was the $5 billion investment, which we hope to close by year-end. Relative to the collaboration, you know, I think it was really important to get that done. That was the culmination of over a year’s worth of work. There was a lot of back and forth between both companies at the core engineering level to really drive that collaboration. I think it really is an endorsement of at least two things. One, really the criticality of the x86 ecosystem.

As much as there are new applications around AI, you know, AI still fundamentally sits on top of and leverages a compute architecture that’s been x86 for the last 40, 50 years. I think NVIDIA saw value in getting locked into that x86 ecosystem. The other really important endorsement, this is a multi-generational agreement. You can imagine the amount of tire kicking that the engineers at NVIDIA did looking at our roadmaps, both in client and in data center over multiple generations. Quite frankly, they like what they saw. Now, to your specific question, you know, what does it do for us? I mean, clearly today we’ve got a pretty strong position in head node CPUs going into AI accelerated servers.

As NVIDIA did more with both Grace and what will be Vera, you know, there were some questions that investors were asking about our ability to sustain that position. I think the disadvantage we had prior to this collaboration is that both Grace and Vera, I think, utilize, you know, the NVLink integration that we now have with this collaboration with NVIDIA. The way the data center side of the relationship is going to work is we will provide them with a custom Xeon part that they will then integrate into their system, and they will have the responsibility of going to market, and we’ll get all the benefits of having that NVLink fabric with our custom Xeon. If you look at the client part, I think that clearly we have the opportunity to build really a new class of PC parts that we’re pretty excited about.

The way that that relationship is going to work is, you know, they will provide the graphics tile through bailment, which means that the customer will actually pay them for the graphics tile, but we will be responsible for integrating that graphics tile with our CPU and bringing it to market. The reason why we have this bailment agreement is we did not want to have the same economics around this NVIDIA collaboration that we have today, for example, with Lunar Lake, where we have the embedded memory, which is really revenue at zero calorie gross margin. That is why it works best for us to move down this bailment path. We are pretty excited. We think that it is TAM expansive both in the data center and in the PC market. We are looking forward to getting product out to market as quickly as possible.

Yeah, that makes sense. Just to be clear, this is a custom CPU that you’re going to sell directly to NVIDIA. This is not going to a third-party hyperscaler.

That’s correct. They will integrate it, and they have the go-to-market responsibility at the system level.

Got it. You have not talked about the timing or potential.

We have not. I mean, I think clearly this is a key focus. I actually think that Lip-Bu and Jensen are having meetings every other week with the team to do deep dives. Our intent is to get this to market as quickly as possible, but we have not set a timeline.

Got it. Got it. On the PC side, client side, I mean, you have your own graphics development, right? Today you sell a lot of integrated GPUs. I mean, when you say you’re going to kind of package NVIDIA’s RTX into Intel CPUs, what does that mean? Are you targeting a particular market with NVIDIA solutions, or what happens to your own internal development?

Yeah, I think we’re going to continue to pursue our own internal strategy. Just like on the data center side, NVIDIA will continue to pursue their ARM strategy with Grace and Vera. You know, time will tell as to what portion of the market this will actually cover, but we are going to be able to bring, I think, a new level of performance on graphics to a notebook-type class PC, clearly initially targeting the high end, but there are aspirations that we can broaden the market further as we develop this relationship more.

Okay, that makes sense. Maybe switching gears a little bit here. I mean, you know, talking about the overall AI strategy for Intel, right? On one hand, you just mentioned there’s a lot of opportunity on the x86 CPU itself. And then I guess on the PC market, you guys talked about, you know, potentially shipping 100 million, you know, AI PCs this year. At the same time, you know, when it comes to XPUs, it’s a, you know, supposedly a trillion-dollar market, and you’re missing out on that. What’s the strategy? I guess, you know, where do you think you can intersect that market, and what’s the approach internally? Maybe, you know, anything you can tell us about.

Yeah, I’m glad you asked the question in the way you did, because oftentimes we over-rotate to our AI strategy being really our accelerator strategy. And to your point, you know, AI is driving a lot of what we’re doing in the PC market today and in the traditional server market. It’s also a big sort of driver in what’s going on with the fab business on the wafer side and the advanced packaging side, which I’m sure you’ll ask about later. But specifically on the accelerator side, you know, I think that we’re still in the process of kind of bringing our strategy fully to market. Lip-Bu has talked about the idea that we’re looking at really an inference-specialized GPU to go after the inference part of the market.

If you look at sort of what’s going on in the hyperscale training market, we think that market is already well served with NVIDIA and with ASICs. It is really the opportunity to go after agentic AI and physical AI, really optimized for inference, that we’re targeting.

Okay, that makes sense. There was also a mention of, you know, customization and ASIC on the last earnings call. Just kind of, you know, level set, are we talking, you know, ASICs similar to what Broadcom and Marvell are doing, or is this something different? What’s the approach here?

Yeah, I think we are talking similar to what Broadcom and Marvell are doing today. I like to remind people we’re actually an ASIC company today. We have ASICs, multiple wins, more in networking than not, that now fall under Srini’s purview in the central engineering group. ASICs are not new to us. We have a lot of the building blocks we need to be a broader participant here. They just haven’t been optimally managed. I think that’s one of the reasons why Lip-Bu brought Srini in from Cadence to really.

Not me, right?

Not you. Just to be clear, different Srini, to come in and really drive that business. This will be across x86. It will also be for accelerators. Quite frankly, it is sort of agnostic. Could it be x86? Yes. Could it be ARM-based? Absolutely. Does it need to be built in our fabs? Could be. Does it have to be built in our fabs? No. I think that there is a real opportunity here. Quite frankly, one of the first things that Lip-Bu did when he joined the company as CEO was go out on a listening tour to customers. One of the things he identified is that while customers are doing a lot of ASIC activity, they are not fully satisfied with the suppliers that they have today. We think we have a real opportunity here. It will take time.

I want to be very clear, we are on a journey, but we’re pretty optimistic about the assets that we bring to this market.

What do you think is the differentiator here? Obviously, you know, Broadcom would argue that they’ve been doing this for almost 30 years since Avago. I mean, they’re still Avago. And Marvell, you know, they come from IBM legacy. What is it that Intel brings to the table that the market, you know?

I think the two biggest benefits we have is one, the x86 ecosystem that we’ve been investing in for decades and decades. There’s clearly importance to that. I mean, even if you look at the ARM server market today, ARM has been very successful for internal workloads at hyperscalers. It’s been significantly less successful for some of the external workloads. No reason why there couldn’t be an x86-based ASIC to try to address that market for some of the hyperscalers. I think the other real advantage we have here is just system know-how. I think that that’s going to become critically more important as you start thinking about some of the next generation products that these hyperscalers want to bring to market.

Right, right. I mean, there’s been a lot of chatter about, you know, even companies like Broadcom potentially doing rack-level solutions, right? I mean, you obviously have done historically some reference designs on that front, but do you feel that you have all the pieces, you know, that, you know, go into building a rack system at this point, or do you think you need to acquire any of those?

Yeah, it’s a good question. I think that, you know, we have a lot of the IP blocks that we need. It doesn’t mean that we have them all. It doesn’t mean that we have to acquire. It could be that we license. Stay tuned on that front. Again, I want to be just clear, we’re excited about this opportunity, but it will take some time for us to go off and execute.

Got it. You know, obviously we talked about balance sheet being pretty healthy right now. There was some speculation in the market about potential M&A. I do not expect you to comment on that, but I guess, you know, would you completely rule out M&A, you know, as part of your AI strategy? Because it looks like there are, you know, enough, you know, startups out there, enough solutions out there that could be quite interesting.

Yeah, it’s a good question. Again, I’m not going to address any of the specific speculation that might be in the marketplace today, but I think clearly there are a lot of strengths that Lip-Bu brings to the role as CEO, his turnaround at Cadence, his relationships across customers, suppliers, and quite frankly, competitors, and his knowledge of the ecosystem through his investments at Walden. You know, I think that when the board chose Lip-Bu to be CEO, they chose him because they wanted to leverage all of those strengths. As you think about our M&A strategy, I wouldn’t rule it out. I think quite frankly, tuck-ins could make some sense. I also think partnerships could make some sense. You know, one of the things that I think Lip-Bu and Dave have plenty of is pragmatism.

I think they’re going to be very pragmatic about how we go off and prosecute this strategy.

Makes sense. And then just switching gears a little bit to the, you know, the near-term server demand, it’s been pretty healthy despite, you know, the concerns about ARM gaining share or AMD gaining share, et cetera. You talked about supply constraints, et cetera. But talk to us about, you know, the roadmap on the server side. AMD hosted their analyst day, and they gave us some targets, you know, about market share. They’re targeting more than 50% share. So I’d love to hear your thoughts on, you know, how you think about it, you know, not just, you know, this quarter, next quarter, but as we look out to the next two to three years, you know, what’s the strategy to kind of stop those share losses and maybe potentially regain some share?

Yeah, we’ve been pretty front-footed and clear on this. We’re not where we want to be competitively on our server roadmap, and we have work to do. I’m sure you saw that we hired a new head of the data center group recently, Kevork, coming in from Qualcomm and, prior to Qualcomm, NXP. I think he brings a lot of strengths and knowledge from transistor to SoC to system, which is going to be important as we rebuild sort of the roadmap on that, on the product front. Having said that, I also want to make sure that it’s clear that we say server market as if it’s one market. There are a lot of different segments within the server market today. Quite frankly, we’ve been very pleased by the ramp of Granite Rapids. It’s still early in the ramp.

Quite frankly, if we had more wafers, we’d have more revenue. We are going through a pretty tight supply situation. I think as you think about the roadmap from here, one thing I think it’s important to highlight is, as you know, these product design cycles take multiple years. As Lip-Bu came in the door in March of this year, a lot of Diamond Rapids was already baked from a design perspective. Clearly, I think he and Kevork are trying to make tweaks around the edges to do better there. I think as you think about the different sort of flavors of Diamond Rapids, at the high end, we feel pretty good about where we are competitively.

Really, and I think Lip-Bu talked about this a little bit on the Q3 earnings call, it’s really Coral Rapids where I think we have the opportunity to really have sort of a clean sheet approach and implement some of the key, I think, IP blocks that Lip-Bu and Kevork think are necessary to bring out competitive product.

Got it. Maybe we can touch upon a little bit on the margin side of things. You know, data center margins, if you kind of compare, client margins are doing actually, you know, pretty well. On the other hand, data center is actually somewhat depressed versus history. What is driving it? Is it more on the cost issue, you know, when it comes to wafers, or is it more R&D spending? Just any color on that, I think, will be helpful.

Yeah, I mean, listen, I would be very clear. I don’t think we’re satisfied where our margins are for either client or for data center today or for the overall company. And so we’ve got work to do. I think we’ve got sort of a plan in place to show improving gross margins as we go throughout 2026 and beyond. You know, having said that on the data center front, you know, I think that as we’ve been trying to catch up on performance, we’ve been over-rotating to performance over cost. We haven’t really been as cost-efficient as we could be. I think that’s one of the, I think, muscles that Lip-Bu is bringing back to the organization. This notion that in order to be competitive, it’s not just being competitive on performance, you have to be competitive on cost.

We think that, you know, our PC roadmap starts to get there with Panther Lake, even more so with Nova Lake. As I pointed out earlier, it’s going to take some time on the data center side.

Right, right. Yeah. Just to follow up on the margin front, you know, you did pretty well in the quarter. You did, you know, 40%, better than expected, but your guidance, you know, suggests three, four points of decline sequentially. You made some comments about next year, you know, what are the puts and takes? Maybe we can kind of revisit them, right? On one hand, the demand is very strong. You’re supply constrained. I think the pricing is generally pretty healthy. On the other hand, you have new products ramping, and you also have some mix headwinds. As we think about the next, you know, four, six quarters, you know, what are the two, three, you know, puts and takes, you know, that are going to impact the gross margins?

Yeah, really good question. I’ll start by reiterating, we only guide for one quarter out. To your point, at the midpoint of our revenue, we did guide gross margins to about 36.5% in Q4, which would be down about 350 basis points sequentially. Let me kind of deconstruct that for the audience. About 50 basis points of that is just taking Altera out of the numbers. Now that we’ve completed the stake sale to Silver Lake, we’ve deconsolidated that. Gross margins at Altera were above corporate average. So 50 of the 350 is Altera coming out. Of the other 300 basis points, I would say it’s plus or minus equally sort of distributed amongst three things. One, the early ramp of Intel 18A, which is always pretty expensive, especially as we’re just getting wafers out of Oregon.

It won’t be until the beginning of next year that we start to get wafers out of Arizona with a much better cost structure. It is pricing action that we’re taking on Arrow Lake and Lunar Lake to kind of navigate through this tight supply situation. I think the tight supply situation is going to be here for a bit. You know, we talked about on the earnings call, Q1 being the peak of tight supply, but it will persist beyond Q1. As we try to navigate through this, we are taking actions that are both gross margin accretive and create incremental gross margin headwinds. On the gross margin accretive side of things, you know, we are favoring sort of servers over PCs. You know, we’re probably de-emphasizing the low end of the PC market.

We are raising pricing on parts on Intel 10 and Intel 7, like Raptor Lake, because of the tight supply situation. On the other end of the spectrum, because we know we’re shorting the market and we’re trying to do what’s right for our customers, we are bringing price points down on both Lunar Lake and Arrow Lake to fill different parts of the PC stack so that we don’t undership the market by too much. Those are sort of the puts and takes. I would tell you of all the things, probably the most meaningful is Lunar Lake. As you know, we’ve got the embedded memory in Lunar Lake, and that does create some gross margin challenges.

I would say at the end of Q2, we would have been pretty confident telling you that Lunar Lake volumes probably were going to peak in Q3, be flattish sequentially in Q4, and then start to decline after that. Now, with some of the demand shaping that we’re doing, you know, Lunar Lake’s absolutely going to grow sequentially in Q4. It’s absolutely going to be up next year, year over year. That does create some incremental challenges, but we think that’s the right trade-off as we try to do the right things for our customers.

It makes sense. When do you think Lunar Lake will kind of peak out in terms of, you know, as % of the overall mix?

I want to be clear. Lunar Lake is a very small portion of the mix. But given the embedded memory and the cost of that embedded memory, you do not need it to be a significant part of the mix to have a detrimental impact on margins.

Right. There are no supply constraints on the Lunar Lake front because it’s mostly.

It’s a good question. I mean, we clearly talked on our Q3 earnings call that we are having supply constraints, but I also think that there are supply constraints in general across the industry, whether that be substrates with T-Glass or, quite frankly, some of the memory concerns that have popped up of late. I think TSMC has done a fantastic job as a supplier getting us wafers, but they’re tight as well.

Okay. When you talk about supply constraints, we’re not just talking about Intel 10, 7, which is in-house, but also the wafers that are coming from TSMC are tight as well.

I think we have better availability of supply there because of some of the decisions we’re making. Remember, we are actively moving our internal supply away from PCs towards servers, in large part because we’re undershipping the server market by a wider margin than the PC market. We want to make sure that we capture that opportunity.

Got it. Got it. You know, obviously Panther Lake and the yields, 18A, there’s a lot of focus on that. Maybe, you know, looking at history when you launch a new product like Panther Lake, how long does it take before that, you know, product kind of, you know, becomes, I guess, higher % of the overall mix? When does that crossover happen?

Yeah, it does take, I think, longer than people suspect. What I will tell you, because we haven’t been that explicit, as you look at the expected ramp in Panther Lake next year, and remember, Panther Lake is just a notebook part. If you compare that to the last two notebook launches that we had with Arrow Lake and Meteor Lake, you know, there’s nothing unusual about the Panther Lake ramp as a % of the mix. We clearly want to do better on the gross margin side. I think what’s important is when Lip-Bu rejoined in March, he was unsatisfied with yields, and he was unhappy that the progress on yields was sort of erratic. I think one of the things that’s changed dramatically over the last seven or eight months is we now have a predictable path for yield improvement.

You know, we’ve talked about in the past that the industry average yield improvement on a new ramp is about 7% per month. We are now on that curve for Panther Lake, which is giving us some confidence as we launch the product this quarter. Like I said, if you go to CES in January, you’ll hear a lot more about that.

Okay. And then, you know, just to kind of follow up on those margin comments, I guess on the earnings call, you sounded optimistic about, you know, margins improving through all of next year, maybe by end of next year, kind of getting to a, I do not know, accretive level. I think that was the term used. So I guess, you know, it depends on the yields, and you talked about them right now, but longer term, given that this is, you know, a new process for you and assuming that the yields shake out where you expect them to be, how should we think about, you know, your longer term margin potential here? I mean, I do not know when the last time you guys gave us a long-term model, but are we talking a five handle in front of gross margin or is it a six handle?

How should we think about it?

Yeah, it’s a good question. Let me first back up a little bit on the lead up to your question. When we think about Panther Lake, absolutely, as we go through next year, increasing volume will help us come down the cost curve. You know, I’ll remind you that initially on any new process, we take wafers from Oregon. Oregon is where we do all of our technology development and then move into quasi-high volume manufacturing. Those wafers tend to be pretty expensive. Most, if not all, of the Panther Lake wafers this year are coming from Oregon. As we transition into Q1, you’ll start to see wafers coming in from Arizona with a much better and different cost structure, and that ramps throughout the year.

As far as the more important part of your question, the second half of your question on the long-term gross margins, I’m not going to get ahead of Lip-Bu putting out a financial model. You know, we’ve discussed internally about when’s the appropriate time to have an investor day. Stay tuned on that front. My guess is it’s going to be sometime in the second half of next year. Clearly, there’s nothing in sort of the way we’re thinking about our business that wouldn’t suggest that we should have industry comparable gross and operating margins with our fabless peers. Quite frankly, with the margin stacking of being an IDM, you know, we should do a little bit better than that.

Okay. Definitely look forward to the analyst day. I want to switch gears to the foundry side. Obviously, you know, it’s a challenging business. And, you know, I think you talked about 18A potentially, you know, leading to a break-even point at some point. I think in the past, you said maybe 2027. I don’t know if that’s still valid or not. Also, you know, I think since Lip-Bu took over, there’s been a comment that, you know, if you don’t get a material customer for 14A, you might actually stop process development. My question is, is 2027 still the target for foundry break-even?

Yeah. So what we’ve said historically is that the ramp of 18A, mostly on the back of internal products, should be able to drive Intel foundry to be operating at a profit break-even on a run rate basis exiting 2027. That is clearly where Naga is still driving the organization. I will give you an important caveat, though. When we win a customer for Intel 14A, we will have to layer on expenses well ahead of getting revenue. I do think, you know, for transparency purposes, as 14A sort of customer traction materializes, it’s likely to push out that end of 2027. I’m thinking, though, most investors will be okay with that because it will be confirmation that we can actually stand up an external foundry.

Yeah, it’s probably a good problem to have. We’ve got a couple more minutes. I want to see if there are any questions from the audience. Okay. We’ll keep going. Again, you know, the 14A process node comment that caught, you know, a lot of attention, right? At the same time, you also said until 2030, you’re good with 18A, 18AP for your internal products. You don’t have to, you actually don’t need 14A for internal products, right? Talk to us about, you know, if you were to hypothetically give up 14A development, what does that do to your, I guess, OpEx or CapEx? How does that impact the business model?

Yeah, I always hate the hypothetical questions because they only get me in trouble. I want to be very clear. We are all in on the development of Intel 14A, and we’re feeling good with the early engagements we’re having with external customers. I do think it’s important to point out that 14A is, in many ways, a very different node from an external perspective than Intel 18A. I mean, I think simplistically, and I’m sure that the teams in TD are going to give me a hard time when I get back to the office, but I always think about any node having three phases. There’s sort of the definitional phase, the development phase, and then the high-volume manufacturing phase. On 18A, in the definitional phase, we were only engaging with Intel products.

It really wasn’t until the development phase that we actually started soliciting feedback from external customers, which meant that a lot of the choices we made at the transistor level were really to optimize for the internal product groups instead of external customers. In addition, it was really our first foray into understanding PDKs, process design kits, and we had some growing pains on getting our PDKs to be true industry standard. I think the big difference on Intel 14A is that we are in the definitional phase engaging with external customers. What that means is we’re getting earlier, more, and better feedback on how we’re doing from those external customers at 14A than we did at 18A. Our PDK maturity is much better. We are now bringing to market industry standard PDKs, both of which help tremendously.

I’d also point out that at 18A, we were changing from FinFET to Gate All-Around. We were also adding backside power. We were making major changes. At 14A, it’s a second-generation Gate All-Around. It’s a second-generation backside power. We have stated and been very clear, if you look at where we are today on 14A on performance and yield versus a similar point of development on 18A, we’re significantly further ahead on 14A. We are feeling very good about 14A. To answer your question, on the Q2 earnings call, when we first introduced the risk factor around 14A, Dave did mention that maintenance CapEx was sort of that high single digits billions number. That’s the way you should think about our capital spending as we get through the 18A capacity build-out if 14A weren’t going to happen.

I want to be very clear, we are all in on 14A.

Got it. That’s all the time we have. Thanks, John. Appreciate it.

Thank you.

This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.
