On Tuesday, 01 April 2025, Cisco Systems Inc. (NASDAQ: CSCO) participated in the OFC 2025 conference, sharing insights into its optical business strategies. The discussion, featuring Bill Gartner, who leads Cisco’s optical business, highlighted strong demand from hyperscalers and the company’s projected $1 billion in AI-related business this fiscal year. While Cisco’s partnership with NVIDIA and advancements in optics are promising, challenges remain in manufacturing and market transitions.
Key Takeaways
- Cisco’s AI-related business is expected to reach $1 billion this fiscal year, with half from optics.
- Hyperscalers drive demand, though it remains volatile and "lumpy."
- Co-packaged optics (CPO) are a focus, but manufacturing challenges persist.
- Cisco’s collaboration with NVIDIA aims to enhance enterprise AI solutions.
- Pluggable transceivers remain a robust market despite CPO’s rise.
Operational Updates
- Cisco’s optics business is thriving, driven by demand for ZR optics in AI applications.
- The company is developing a 1.6 terabit PAM4 DSP for data centers.
- Cisco maintains a multi-vendor optics strategy, developing some optics in-house and sourcing others.
- The rigorous qualification process ensures high reliability and customer confidence.
Future Outlook
- Co-packaged optics are gaining interest, particularly in dense hyperscaler applications.
- The potential migration of coherent optics into data centers could address higher bit rate demands.
- Cisco’s AI HyperFabric solution integrates optics and switching for enterprise AI infrastructure.
Q&A Highlights
- Demand from hyperscalers is strong but inconsistent, affected by product transitions and supply chain issues.
- Cisco’s Silicon One technology helps drive optics sales, even in third-party equipment.
- The Acacia acquisition and Cisco’s optics business are crucial to its success across various sectors.
For a detailed understanding of Cisco’s strategies and market insights, refer to the full conference call transcript below.
Full transcript - OFC 2025:
Simon Leopold, data infrastructure analyst, Raymond James: Hey, folks. Thank you very much for joining us. We’re coming to you live today from the Optical Fiber Conference, OFC. This is Simon Leopold, Raymond James data infrastructure analyst. And I’m pleased today to be hosting Bill Gartner of Cisco, who runs the optical business unit over there.
We’ve got some prepared questions we’re gonna go through in sort of a fireside chat format today. Bill, I think, will have a lot of interesting thoughts on what’s going on in the industry. He’s been around for a bit. So, Bill, why don’t we start off just to set a little bit of context for our audience, so folks understand your scope of responsibility and your fit within Cisco.
Bill Gartner, runs the optical business unit, Cisco: Yeah. Thanks, Simon. First of all, thank you for having me. And let me just start with a forward-looking statement that I’m encouraged to make by our investor relations team: I will be making forward-looking statements, our actual results may differ materially from those forward-looking statements, and they are subject to the risks and uncertainties found in our most recent 10-K and 10-Q.
With that, I have responsibility for what you can really think of as three separate businesses within Cisco. One is the optical systems business. That’s the traditional DWDM business that’s used to carry signals over long distances across a city, across a country, or subsea, typically using chassis-based solutions that include ROADMs and amplifiers. The second business is our optics business, which is the transceivers that are used by switches and routers inside a data center, inside a central office, or in a campus environment, typically less than 10 kilometers for those applications. And then I have responsibility for Acacia, which was an acquisition we completed just about four years ago.
And Acacia provides the underlying technology for our optical systems, as well as pluggable coherent technology that we use in many applications, and we’ll be talking more about that.
Simon Leopold, data infrastructure analyst, Raymond James: So great. So, Bill, the trade show is just about to kick off. We attended the executive forum yesterday. So what do you think will be the hot topics at the show? And don’t simply say AI.
We wanna get a little bit deeper than that. What do you see as the hot topics, and what will Cisco be highlighting?
Bill Gartner, runs the optical business unit, Cisco: Yeah. So, certainly, AI is sort of the overarching theme behind a lot of the things that are driving capacity in customer networks, whether it’s hyperscalers or service providers, and ultimately enterprise networks as well. I think one of the hot topics is co-packaging. There’s sort of renewed interest around co-packaging, and we can talk a bit about our views on that. Cisco has been very deliberate in advancing the idea that pluggable coherent optics can replace transponders in many network applications, including data center interconnect, metro, and now long haul.
And we’re showcasing a 400 gig ultra-long-haul optic that can be used in applications up to 3,000 kilometers, as well as 800 gig ZR and 800 gig ZR+ optics that really advance the state of the current 400 gig optics used in data center and metro applications. We’re also showcasing a new optical line system that’s optimized around point-to-point metro applications, leveraging our NCS 1014 chassis that has historically been used to host transponders. We’ve now expanded that to include line system components like amplifiers and mux/demux. And we’ll also be showcasing client optics that are optimized for AI applications, so 400 gig and 800 gig client optics really targeting AI applications.
Simon Leopold, data infrastructure analyst, Raymond James: So I definitely wanna get into some of the more technical discussions, but let’s start off with a little bit about what’s going on near term in the marketplace. There’s been a lot of noise about the AI builds by the large hyperscalers being lumpy, some projects being deferred, what’s being slowed down or accelerated. What’s your take, as somebody in the trenches, on the level of activity?
Bill Gartner, runs the optical business unit, Cisco: Well, let me say for the first half of our fiscal year, which began August 1, the demand has been exceptionally strong and mostly driven by hyperscalers. And I would characterize that demand as very lumpy. It comes and goes, and we’ve seen huge upswings and huge downswings in that demand over time. This most recent period was almost all upswing. We do see some of the hyperscalers take a breath at this point.
But overall, in aggregate, I’d say the demand is still very high. They are building out data centers and connecting data centers for AI applications, which has an impact on things like our ZR optics, at a rate that we’ve just never seen before. So we are pretty bullish on the near-term prospects for continued growth. But I would say we’re also cautious. You know, we’ve been caught up in cycles before where things just slow down without any particular explanation.
So the real challenge for a company like Cisco is, you know, how much inventory do you build and carry in anticipation of that demand continuing to grow, so you’re ready when that demand is there, versus scaling back and trying to take a more conservative position.
Simon Leopold, data infrastructure analyst, Raymond James: And maybe this is a little bit out of scope for you, but just wondering if you’ve got thoughts on what’s creating the lumpiness. Is it a product transition, where we know NVIDIA is going from Hopper to Blackwell, or is it more sort of macroeconomic questions, or is it simply that managing large projects takes time?
Bill Gartner, runs the optical business unit, Cisco: I think it’s all of the above. In some cases, a given customer may be waiting for fiber build-out, for instance, or they can’t get components that are required to build out the data center. And there’s a number of things required, obviously. You have to have servers. You have to have GPUs.
You have to have networking. You have to have optics. If all those things don’t line up, they’re gonna tap the brakes a bit. When all those things are lining up, they’re hitting the gas. And so it can be a number of things.
I don’t think it’s a macro issue, though. They’ve all signaled very strong CapEx plans for the year. So I don’t think there’s a macro issue that we would see here, but I do think there can be things like fiber build-out or specific supply chain issues that might be slowing things down in some cases.
Simon Leopold, data infrastructure analyst, Raymond James: So, interesting disclosure Cisco made on the last earnings call: that about half of the $350 million of AI-related business was optics. I sort of feel like that’s not been well appreciated. So maybe help folks understand how optics fit into that. What’s the AI business, and what’s the optics in there?
Bill Gartner, runs the optical business unit, Cisco: I’d say there are three categories of optics that I would consider in that. One is the obvious client optics that are used in switching and networking. Then, for some applications, a hyperscaler has indicated to us that it is an AI build-out, and in some cases they call it their AI WAN, as an example. It might be an optical system capability that we’re delivering to that customer, or a ZR-class optic.
All of the hyperscalers, I would say, at this point are deploying ZR-class optics, either ZR or ZR+, 400 gig for the most part. When they have signaled to us that it’s part of an AI build-out, we would include that as part of our AI business, where we’ve signaled to the street we’ll do a billion dollars this fiscal year. So it includes really any of those three: the transceiver optics, or optical systems, or ZR. But it has to be part of an AI build-out. We wouldn’t consider it part of just a normal WAN development for any of the customers.
Simon Leopold, data infrastructure analyst, Raymond James: Now, Cisco did a product press release last week that caught my attention, so I want a little bit of help putting it in context and gauging its significance. It was a new three nanometer 1.6 terabit PAM4 DSP, 200 gig per lane. I don’t know that people typically associate Cisco with those kinds of products.
Maybe give us a little bit of background on what’s the strategy here.
Bill Gartner, runs the optical business unit, Cisco: I think one of the things that’s underappreciated about Cisco, and especially the optics area, is that we have one of the biggest optics businesses on earth, and that’s largely due to the fact that we serve all of our customers, whether it’s an enterprise customer or commercial, public sector, service provider, or hyperscaler, with optics that are sold as part of our routing and switching sales. We also sell optics to customers that may have chosen a third-party vendor, whether it’s white box or a competitor. In that case, we’re happy to offer optics to that customer. And we should talk a little bit more about that buying behavior as well. We develop some of those optics in-house, and we source some of those optics from the industry.
What we announced last week was a 1.6 terabit DSP that’s actually being developed by the Acacia team, who have very significant experience in developing DSPs, obviously. And we’ll be offering that DSP to module manufacturers who may wanna incorporate it into their modules and sell it on the street, and we’ll also be including it in our own optic that we’ll be offering to customers.
Simon Leopold, data infrastructure analyst, Raymond James: And just to clarify, I believe this is 200 gig per lane inside the data center, not wide area network.
Bill Gartner, runs the optical business unit, Cisco: That’s correct. This is not a ZR optic. This is a more conventional short-reach optic that would be used inside the data center.
Simon Leopold, data infrastructure analyst, Raymond James: So let’s pivot the discussion to the proverbial elephant in the room: co-packaged optics. I don’t think you could swing a dead cat here and not run into that conversation.
Let’s start off at a very high level. Just what’s your take on the technology?
Bill Gartner, runs the optical business unit, Cisco: So I wanna offer somewhat of a balanced view here. I think we’ve been in a froth cycle in the last month or so around co-packaging. Cisco demonstrated a co-packaged solution at OFC two years ago, actually, OFC 2023, and it was a 25.6 terabit switch that included co-packaged optics. I think Broadcom demonstrated one at the same time.
And I would say the industry response, and by the industry I mean primarily customers, the customer response was fairly muted to negative. There were a lot of reasons why customers looked at that and said, that may not be something I really wanna consider deploying. So let me just outline a few of those reasons that became apparent to us at that time. One was that, if you think about today, there’s a number of silicon providers: Broadcom, Cisco, NVIDIA, a couple of others out there. But there are many optics suppliers for the industry, and our customers benefit from the fact that there’s a multi-vendor optics community that they can leverage to make sure that they have supply chain integrity and diversity and choice and the ability to negotiate.
When you build a co-packaged solution, you’re effectively consolidating the value of both the silicon and the optic into one monolithic structure, which then really deprives the customer of that choice on the optic. And I think many customers looked at the co-packaged solution and said, you know, there are power benefits, and that was the primary argument for co-packaging: there’s a power benefit that’s delivered. But that power benefit is probably not worth trading off the multi-vendor supply base that I have in optics. I think that was one issue.
The other is that optics today in many applications are pay-as-you-grow. Meaning, customers buy a switch with 32 or 64 ports on it, and they populate it over time. And they take advantage of the fact that they can defer the cost over time. That goes away with co-packaging, because all of the optics are basically delivered day one. And then I would say there are applications where customers mix and match optics on a given switch or router.
It’s not one monolithic optic like a two kilometer or a 500 meter optic. There’s mixing and matching depending on the infrastructure, depending on the needs. You might have ZR optics and short-reach optics, for instance, in one router. That goes away as well. And so at that time, I think the feedback we got from the industry was: we should pursue other ways to reduce power while preserving a multi-vendor optics environment.
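The pay-as-you-grow point is just about the timing of spend: with pluggables you pay for an optic when you light the port, while CPO front-loads all of the optics cost into the switch purchase. A toy cash-flow sketch with invented numbers (the price and fill schedule below are hypothetical, not figures from the conversation):

```python
# Toy cash-flow comparison: pay-as-you-grow pluggables vs day-one CPO.
# All prices and the fill schedule are invented for illustration only.

PORT_OPTIC_COST = 1_000            # hypothetical cost per pluggable optic
PORTS = 64                         # ports on the switch
FILL_PER_YEAR = [16, 16, 16, 16]   # hypothetical ports lit each year

# Pluggables: spend lands in the year each port is populated.
pluggable_spend = [n * PORT_OPTIC_COST for n in FILL_PER_YEAR]

# CPO: the optics are part of the switch, so the spend all lands in year one.
cpo_spend = [PORTS * PORT_OPTIC_COST, 0, 0, 0]

print(pluggable_spend)  # [16000, 16000, 16000, 16000]
print(cpo_spend)        # [64000, 0, 0, 0]
```

The totals are identical; what the customer loses with co-packaging is the ability to defer the spend (and to choose a different optics vendor for later ports).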
And other ways included things like looking at LPO and LRO, ways that we can focus on the optic and take power out of that solution but still preserve the multi-vendor optics space. So that was kind of the state of the world in 2023. NVIDIA, just last week, announced that they’re going to be deploying co-packaging as part of their InfiniBand switch and ultimately as part of their Ethernet switch. And I think that’s kinda juiced up the industry in terms of, hey, what’s happening here?
I’m not sure that the fundamentals that I outlined as some of the objections for the industry have really changed. So I think it’s gonna be interesting to see what the customer take is for this, and whether the power savings that are delivered are really worth trading off some of the things they would get otherwise. And one other thing I would say is, when we look at power savings, I think you really have to look at the whole solution, the whole AI solution, whether it’s within a rack in a scale-up or across racks in a scale-out. My somewhat cynical analogy is: if you reach into your refrigerator and replace the incandescent light bulb with an LED light bulb, you can claim 70% power savings on the light source for your refrigerator, but you’re not gonna change your electric bill. And I think there’s some of that we have to look at in this context as well.
Yes, we can get, say, 30% power savings if you look at the switch plus the optic. But what does that really represent as a part of the total power that’s being consumed in a GPU structure? There’s an argument that says, look, every little bit counts, and we should get every little bit we can. That’s a fair argument, but I think you have to look at what the trade-offs are.
And so I would say we’re in a bit of a state of let’s see how this plays out. I think we’ll see some trials. We’ll see some customers dabble with this. There may be a compelling event at some point in the future where co-packaging becomes the only option for us. I don’t think we’re at that compelling event yet.
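The refrigerator analogy is essentially an Amdahl’s-law argument: a large percentage saving on a small slice of total power is a small system-level saving. A minimal sketch of that arithmetic, with illustrative, assumed fractions (the 10% networking share below is hypothetical; only the 30% switch-plus-optic figure comes from the conversation):

```python
# Amdahl's-law style estimate of system-level power savings.
# The networking share of total power (0.10) is an assumed figure.

def system_savings(component_fraction: float, component_savings: float) -> float:
    """Fraction of TOTAL system power saved when one component's power
    is reduced. component_fraction: that component's share of total power;
    component_savings: the fractional reduction within that component."""
    return component_fraction * component_savings

# If networking (switch + optics) draws 10% of an AI cluster's power
# and CPO cuts that slice by 30%, the system-level saving is small:
print(f"{system_savings(0.10, 0.30):.1%}")  # 3.0%
```

Whether that ~3% (under these assumptions) justifies giving up a multi-vendor optics supply base is exactly the trade-off being described.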
Simon Leopold, data infrastructure analyst, Raymond James: Maybe step back a little bit. For an audience that’s financial analysts, not technologists, what’s going away when we do CPO, and how does that compare with, you know, the acronym soup of LPO and LRO? Walk us through some of the basics here.
Bill Gartner, runs the optical business unit, Cisco: So in co-packaging, first of all, in today’s world, you have a switch or a router that fundamentally has a piece of silicon in it. That silicon has to deliver traces to the faceplate, where there are pluggable optics that would be plugged in as the customer needs the given capacity. A typical switch or router has 32 or 64 ports on it. Each one of those ports accepts a pluggable optic. That pluggable optic is delivered by suppliers in the optics industry, and the switch or router is delivered by guys like Arista, Cisco, Juniper, Nokia, and others.
And we, of course, also deliver optics, but customers have a choice of using our optics or some third-party optics. That’s always a choice that they have. One analogy to think about: it’s a little bit like if you have, for instance, an HP inkjet printer at home. If you buy your ink from HP because you’re worried, you’ve read the warranty, and the warranty says if you buy your ink from somebody else your warranty is void, then you look like the sort of very conservative customer we have that says, I’m gonna buy my optics from Cisco, and I’m gonna buy my switch from Cisco, because Cisco’s gonna take care of me.
And if there’s any problem, I know it’s gonna be taken care of. If you buy your ink from a third party because maybe you use a lot of ink and you really wanna save a few pennies, then you look like the customers that are saying, you know what? I’m gonna buy optics from a third party because I’m pretty sure it’s gonna work, and I’m not too worried that things are gonna break. And I would also say if you buy ink by the barrel and fill your own cartridges, you look like some of our hyperscaler customers. And so that’s kind of the analogy I would use for the optics world.
We have customers that buy directly from us. We have customers that buy from third parties, and customers that try to build their own as well. That choice effectively goes away with co-packaging, because co-packaging means you take the guts of the optic and you physically package it with the silicon. And so it now becomes one monolithic structure that’s mounted on the switch or router line card, and the only things coming up to the faceplate are fiber connectors; there’s no more pluggable that’s part of that solution.
Simon Leopold, data infrastructure analyst, Raymond James: And the LPO and LRO options?
Bill Gartner, runs the optical business unit, Cisco: So the LPO and LRO options are basically playing a bit with some of the innards of the optic, to say: if we removed some of those pieces and asked the switch silicon to work a little harder, could we reduce power? That’s really what LRO and LPO are all about. It’s basically shifting some of the problem that is dealt with in the pluggable optic today into the switch silicon, and delivering an end-to-end solution through better switch silicon performance and a little bit better optic performance, but removing things like the DSP that might sit in the optic today. And in doing that, you can achieve some pretty significant power savings, maybe not quite as much as co-packaging, but pretty significant.
Simon Leopold, data infrastructure analyst, Raymond James: Great. And I guess one of the things that intrigued me out of NVIDIA’s announcement a couple weeks ago was that initially their CPO would run on an InfiniBand switch. So in my circles and your circles, there’s been this debate about AI clusters migrating from the InfiniBand protocol to the Ethernet protocol. So given that CPO will initially run on an InfiniBand switch, what does that tell us about this evolution, transition, competitiveness?
Bill Gartner, runs the optical business unit, Cisco: Yeah. So first of all, I think it’s not surprising. NVIDIA’s got a large embedded base of InfiniBand switches, so it’s not surprising that they’re gonna leverage that first. The other thing I would remind people is that we announced a couple weeks ago a partnership with NVIDIA, where NVIDIA is actually qualifying and including Cisco silicon and optics as part of its reference architecture.
We’re the only silicon provider other than NVIDIA that’ll be standardized as part of that reference architecture. So over time, I would expect customers are gonna migrate to Ethernet. Ethernet is much more widely deployed than InfiniBand. We believe Ethernet has a much longer life in AI applications. And so over time, I would expect the industry is gonna see a much bigger shift to Ethernet, and that would include Cisco silicon as well as NVIDIA silicon.
Simon Leopold, data infrastructure analyst, Raymond James: So the partnership’s intriguing. What is sort of the rest of Cisco’s play, whether it’s LPO, LRO, co-packaging? What else are you doing in this context?
Bill Gartner, runs the optical business unit, Cisco: So, in fact, we’re demonstrating capabilities here at OFC for delivering optics into the AI stack, whether it’s a scale-up or scale-out solution. That includes, for instance, 400 gig optics that would go into the NIC. It includes 800 gig optics that would go into a switch. That will be used as part of a scale-up or scale-out. It would also be used in an enterprise AI application.
And I think that’s the big thing still to come in AI. It’s part of the reason why Cisco has partnered with NVIDIA: NVIDIA brings a lot of the technology in the form of GPUs, and Cisco brings access to the enterprise customer base. With that partnership, we expect to be delivering AI solutions for enterprises that wanna have an on-prem solution. Obviously, that’d be a much smaller solution than what a training model looks like, but it will be a highly optimized AI solution for customers.
We’ve called that AI HyperFabric, and that will be used for enterprise customers that wanna run an inference model on-prem. It would include our networking, our optics, our software managing all that, and then NVIDIA GPUs and NICs.
Simon Leopold, data infrastructure analyst, Raymond James: So one of the topics I feel like has been glossed over in these announcements is: how does one manufacture CPO?
Bill Gartner, Runs the, optical, business unit, Cisco: Mhmm.
Simon Leopold, data infrastructure analyst, Raymond James: So maybe you can help folks understand what the hurdles are to bringing this kind of technology to market.
Bill Gartner, runs the optical business unit, Cisco: Yeah. I think that is a bit underappreciated in a lot of the, you know, the enthusiasm around CPO. My view is that that is going to be the major barrier to CPO really penetrating the market in a significant way. If you think about today, the industry around silicon is very mature. You know, this industry knows how to package silicon, how to cut up a wafer and then package it, whether it’s for an Intel CPU or for switch silicon.
That industry is very mature. There’s an industry that is also mature around optics: making transceivers and dealing with some very specific optics issues like fiber attach. Like, how do you attach a fiber to a piece of silicon? That’s a process that requires a lot of skill, a lot of development, and has a certain yield associated with it. When you bring those two together, we have to find the ecosystem that does both of those together.
Because now we’re talking about packaging silicon and optics together, and dealing with all of the issues that you have to deal with in packaging silicon and all the issues that you have to deal with in packaging an optic. And, you know, when we talk about large-scale CPO, we’re talking about something that would include between two and four thousand fibers. So the manufacturing challenges, whether it’s the fiber attach problem or the process development, have to be of very, very high quality, much higher quality than what we have in the industry today. We’ll have new connectors that have to be created to allow for a sort of modular approach to manufacturing, because you don’t wanna build one of these things and then go test it and find out that it doesn’t work. You need to build it in a modular way.
There are connectors that have to be attached to the optics chiplets, and then MPO connectors to actually deliver the optics to the faceplate. There’s fiber routing: how do you route all that fiber on this switch line card? So there are many, many manufacturing processes that really have to be refined here. And I’d say the industry is at an early stage of that, and history says this is gonna be a multiyear challenge for the industry.
This is not something that’s gonna be solved in a couple of months. This is gonna be a multiyear challenge to get the industry to a mature state where manufacturing can be done in a highly reliable way with very high yields. People look at the PowerPoint slide and say, well, the cost should be better with this, reliability should be better. You can kinda wave your hands and convince yourself that that’s the case, but that all presumes we’ve overcome all these manufacturing challenges.
Reliability can be better, but it could be much, much worse if we have problems with, you know, fiber attach or problems with how the connectors are managed. So all of these issues really are still ahead of us, I think, as an industry.
Simon Leopold, data infrastructure analyst, Raymond James: How do you square that with NVIDIA’s timeline of suggesting their InfiniBand switch will be out before the end of this calendar year?
Bill Gartner, runs the optical business unit, Cisco: I think you should probably ask NVIDIA that. But I would say, generally speaking, we’ve built co-packaged solutions. They can be built. The question is, can they really be built in volume, at scale, and with the appropriate reliability?
And, you know, building tens or hundreds of something is very different than building thousands or millions of something. So I think that’s really where the challenge in the industry will be.
Simon Leopold, data infrastructure analyst, Raymond James: And maybe take this into the implications of what this means for the laser. There’s this argument that the laser is the most likely thing to fail, so you wanna have your lasers separate. How are they getting around that?
Bill Gartner, runs the optical business unit, Cisco: So the basic architecture of co-packaging will have an external laser source. It will be a pluggable laser, or set of lasers, that plugs into the faceplate and delivers the laser light to the co-packaged solution. And that does remove the most likely failure element. Of the various piece parts, assuming it’s silicon photonics in the co-packaged optics and silicon that’s part of the switch, the laser is probably the most likely failure element. But, again, that presumes we’ve got connectors and we can mount these things in a way that’s robust.
So I think the architecture of this will rely on these external laser sources. NVIDIA, for instance, I think had 18 of those on the switch they demonstrated. So that will help convince people that the reliability can be good. But, again, we have to overcome all these yield and manufacturing issues to make that really true.
Simon Leopold, data infrastructure analyst, Raymond James: And what’s your take on the implications for your transceiver business, then? If we start doing co-packaging, we have fewer transceivers. Do transceivers go away? Do some transceivers go away? What’s your take?
Bill Gartner, runs the optical business unit, Cisco: I think that the transceiver business is going to be a robust and growing business even with co-packaging. If you look at some of the industry analyst views, the pluggable market continues to grow. Co-packaging sort of sits on top of that as part of the growth. If co-packaging is wildly successful, it will eat somewhat into the pluggable market, but 20% is the highest I’ve seen in terms of a forecast for that. And I think we have to remember that co-packaging will find its home in those most dense applications that require a homogeneous set of optics, fully populated day one.
That tends to be in the hyperscaler, large-scale applications. It doesn’t occur on the WAN side. It doesn’t occur in service provider markets. It doesn’t occur in enterprise markets. So I think pluggables will still have a very long life.
And and I guess, maybe time undetermined, but what are the implications for your switches that would go into these these AI fabrics? Yeah. So presuming co packaging becomes a we we start to see customer pull for it as as opposed to vendor push. That that would mean that we would start to build switches probably two flavors. One that has co packaged option and one that has pluggable optics.
And and then the question is, do we hit a wall where you the only way you can you can get to a certain scale is with co packaging. I think that will you know there there is a potential for that wall to hit us. And if you asked me two years ago, I would have said when we get to 200 gig SerDes, we don’t really know if we can support pluggables. Today, that’s commonplace. If you ask me a few months ago, I would say maybe it’s 400 gig SerDes.
But you and I were at the executive forum yesterday and saw many suppliers out there saying, look, we’re gonna solve the 400-gig SerDes problem. So I don’t know if that wall is really right in front of us or not. If I had to draw a line today, I’d say 400-gig SerDes is probably the first point you’d see CPO really deployed in any volume. And then the question is, is it really a hard wall, or are there other reasons why CPO is being deployed for customers?
I also don’t think a campus customer, an enterprise customer, is typically gonna require the same scale that a hyperscaler would.
Simon Leopold, Data Infrastructure Analyst, Raymond James: So it sounds to me that you’ve given a good argument for why LPO and LRO are a good compromise for the industry. Mhmm. Where’s the market? Because sitting at OFC, you wouldn’t think that’s the case. Yeah.
But it seems like a logical argument. How would you make that argument? Where do you think the reality is?
Bill Gartner, Optical Business Unit, Cisco: I think the reality is that we will see LPO and LRO solutions at 100-gig SerDes, and we will likely see them at 200-gig SerDes. I think beyond that, LRO and LPO start to have some really significant challenges with things like signal integrity. But I’m also a complete believer in the innovators in the industry. And when people see what appears to be a hard wall ahead of us, I have faith that this industry innovates.
Power is a huge issue for our hyperscaler customers; in a given application, it is probably the most significant issue. It’s not the most important issue for many other applications in the networking world. So we have to solve power problems. We have to find ways to incrementally improve on power.
I think the industry has got a lot of thoughts on that, and things like liquid cooling will play a role there as well. So there’ll be other things that come into the market that help to reduce the power problems that we’re facing today.
Simon Leopold, Data Infrastructure Analyst, Raymond James: So I wanna pivot to a different technology that I don’t feel gets much attention, but it’s this concept of an optical switch. Mhmm. I think Google’s been public about building its own. We’ve seen a number of companies enter the space.
So maybe first set the foundation: what is an optical switch, and how is it being used?
Bill Gartner, Optical Business Unit, Cisco: Yeah. So I think the easiest way to think about optical switching is a patch panel, where you would manually pick up a fiber and connect it into the patch panel so that you can connect it to another fiber. These manual patch panels exist in customer data centers and labs today, and that’s how we move traffic around. Somebody walks up, picks up the fiber connector, and moves it to another port. It’s a physical connection that’s moved. An optical switch, or an optical cross connect, effectively automates that.
So it uses technology like MEMS, in some cases silicon photonics, in some cases actually robotics, to effect that switch. It’s a slow-moving dynamic typically, but it’s effectively automating what was done with a manual patch panel. I will say I have two patents in optical cross connects, both of which are expired, which tells you how long this technology has been around looking for a real problem to solve. Historically, it’s been around for quite some time; it hasn’t proven economical for a given application.
And so I think today we are now seeing some applications where optical circuit switching may have a role to play. I think it’s early stage. Google has been probably the most vocal, but I’m certainly aware of others that are now either contemplating deploying this or in the early stages of deploying it.
Simon Leopold, Data Infrastructure Analyst, Raymond James: I guess one of the things I’ve been struggling with is I feel like it’s maybe a misnomer to call it an optical switch rather than a cross connect Mhmm. Or an automated patch panel. And I guess there’s some nuance to that. My understanding is they’re not switching every packet.
Bill Gartner, Optical Business Unit, Cisco: Yes. That’s very important to understand. It’s called an optical circuit switch really to, I think, harken back to literally circuit-switched days, where a circuit was basically nailed up and stayed up for a phone call. That switch was just put in place for the duration of that call. That’s really where the name came from.
An optical cross connect is the same sort of thing: the idea is that a connection is made between two ports and that connection stays. You can think of it as literally a passive connection between port X and port Y. There’s absolutely no packet processing taking place there. In fact, there’s very little insight about what’s going on there. You’re effectively connecting one port to another port physically. The light passes through.
There might be some level of monitoring on that, like what’s the signal level of the light, but there’s no idea what’s going on inside that wavelength. Nobody’s looking at the packets to try to examine what’s happening. So it’s a slow switch, typically. It’s not a fast-moving switch at a packet’s speed, for instance, and it’s got very little insight about what’s really being carried on that wavelength.
Simon Leopold, Data Infrastructure Analyst, Raymond James: So are there use cases that either can flatten the network or can reduce the number of actual electronic switches, whether based on Silicon One or Broadcom’s Tomahawk? Can we reduce the number that somebody needs with the use of optical switches?
Bill Gartner, Optical Business Unit, Cisco: I think so. I think there are potentially some use cases, and I would say it’s very early stage. I don’t think there are a lot of use cases; I will be surprised to see this become a huge portion of the switching market. Google’s case, I think, is probably the most well documented and advertised at this point, and that is really to improve the reliability of their infrastructure.
They’ve got a unique AI infrastructure, and this is somewhat of a surprising point for a lot of people, but when you have thousands of GPUs interconnected with optics, something is gonna fail, and something’s gonna fail at least once a day. And in AI, unfortunately, the challenge is that when something fails, everything stops. And so you basically have all these GPUs idle until you can fix that problem. That’s a very expensive problem. And Google’s approach really is to say, we’re gonna have effectively a spare bay of equipment.
And if something fails in one bay of equipment, which includes GPUs and optics and networking, we’re gonna just switch over all of that traffic to another bay and restart everything. And we’re doing that with an optical cross connect. The alternative could have been that they send somebody into the lab to start moving all the fiber connections from one bay to another bay. That’s effectively what they’re doing with this optical switch.
And they effectively improve their utilization by doing that switching. So that’s one application. We’ve seen other customers begin to think about, in a spine-leaf architecture, similarly saying, if a leaf fails, you could effectively switch to a spare leaf through an optical switch, creating the same infrastructure but with a spare leaf now passing through this optical switch. Again, it’s kind of a reliability argument.
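The reliability point above, that a cluster with thousands of interconnected optics sees at least one failure a day, can be sanity-checked with back-of-envelope arithmetic. The link count and annualized failure rate below are illustrative assumptions of mine, not figures from the discussion:

```python
# Expected daily failures in a large GPU cluster interconnect.
# Both inputs are illustrative assumptions, not quoted figures.
num_optical_links = 20_000   # optical links in a large training cluster (assumed)
annual_failure_rate = 0.02   # 2% of links fail per year (assumed)

expected_per_day = num_optical_links * annual_failure_rate / 365
print(f"~{expected_per_day:.1f} expected link failures per day")
```

Even at a fairly optimistic failure rate, a cluster this size sees roughly one link failure per day, which is the economics behind keeping a spare bay behind an optical cross connect.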
There’s some thought that says, well, for AI workloads, you kinda set up the workloads between GPUs and say, for this workload that might run for hours or days, we’re gonna pass traffic from port X to port Y, and that’s all the switch at this level in the network is gonna do: pass traffic from port X to port Y to get from this set of GPUs to that set of GPUs. And then I think people are looking at that and saying, do I really need an Ethernet switch to do that, or can I just have something that blindly passes everything coming from port X into port Y? That could be the role of an optical circuit switch. Again, it’s not doing packet processing.
It’s basically, I don’t wanna call it a dumb switch, but it’s a dumb switch in the sense that it’s not really examining any of the packets. And I think that’s an application where it could replace an Ethernet switch. Again, I don’t see that putting a big dent in the Ethernet switching market, but this is early stage for this technology. There are a number of startups playing in this game, there are some established players playing in this game, and I think there’s a fair amount of investment going into it.
So we could be at an inflection point where this technology actually starts to find a home in various applications.
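In software terms, the circuit switch Gartner describes reduces to a slowly changing port-to-port map with no visibility into the traffic it carries. A toy sketch of that abstraction (the class and method names are mine, not any real controller API):

```python
class OpticalCrossConnect:
    """Toy model of an optical circuit switch: a passive, bidirectional
    port-to-port mapping, like an automated patch panel. It cannot inspect
    packets; reconfiguration is slow (MEMS mirrors, robotics), not
    per-packet switching."""

    def __init__(self):
        self.connections = {}  # port -> peer port

    def connect(self, port_a, port_b):
        # Equivalent to moving a fiber jumper on a patch panel.
        self.connections[port_a] = port_b
        self.connections[port_b] = port_a

    def peer(self, port):
        # The only "insight" available: which port the light exits.
        return self.connections.get(port)

xc = OpticalCrossConnect()
xc.connect(1, 5)       # nail up a circuit between ports 1 and 5
print(xc.peer(1))      # -> 5
```

Everything the discussion attributes to the device, no packet processing, slow reconfiguration, a spare bay or leaf reached by remapping ports, follows from this being a plain mapping rather than a forwarding engine.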
Simon Leopold, Data Infrastructure Analyst, Raymond James: So another topic that’s sort of been bandied about is the idea that coherent optics, sort of the classic Acacia wide area network technology, is making a move to find applications inside data centers.
Bill Gartner, Optical Business Unit, Cisco: Mhmm.
Simon Leopold, Data Infrastructure Analyst, Raymond James: There’s been a little bit more talk about that. I’ve struggled with it on just the basic economics. The price points for coherent are much higher, for good reason: they do more, send signals further. Can you help us understand, from the demand side, what makes this more interesting? And then from a technological side, how do you take a relatively expensive technology and make it cheaper? Yeah.
Bill Gartner, Optical Business Unit, Cisco: So let me geek out for one minute here and just explain why people make that claim. First of all, dispersion is an impairment that occurs in the fiber that limits how far we can send a signal. It’s just a characteristic of the fiber, and it limits the distance that we can send a given signal. When you double the bit rate, say from 100 gig to 200 gig, the penalty for dispersion is more than double; it’s actually four times.
So, theoretically, let’s say at 100 gig you could go 10 kilometers, just as an argument. If you double that to 200 gig, your penalty for dispersion is four times as bad. That means at 200 gig, you might only be able to go two and a half kilometers. So every time you double the bit rate, you’re gonna have a factor-of-four impairment in terms of the distance. And so the argument for coherent says, well, our approach inside the data center has been to double the bit rate, double the bit rate, and then we gotta add a few tricks like PAM4.
But at some point, we’re gonna get to the point where we wanna increase the bit rate, say from 800 gig to 1.6T, or 1.6T to 3.2T, and that dispersion penalty becomes so bad that I can’t send the signal over a conventional distance. A conventional distance in the data center is two kilometers, or maybe 10 kilometers for some applications. And so what worked at 10 kilometers, for instance, at 800 gig is probably not gonna work at 1.6T; maybe the best we could do is two kilometers. And what do you do now for the customer who says, I really need 10 kilometers in my data center?
Or what worked at two kilometers might not work at two kilometers anymore; now we’re talking, like, 500 meters. So what do you do? The answer is you gotta add some more tricks in order to solve that problem. And we’re now sort of out of the bag of relatively easy tricks.
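The scaling argument above can be made concrete. Assuming reach is purely dispersion-limited, so it falls by 4x per bit-rate doubling, and anchoring on Gartner's hypothetical 10 kilometers at 100 gig:

```python
def dispersion_limited_reach_km(bit_rate_gbps, ref_rate_gbps=100, ref_reach_km=10.0):
    # Dispersion penalty grows ~4x per bit-rate doubling, so reach
    # scales as 1 / (bit rate)^2. Purely illustrative numbers from
    # the discussion, not real transceiver specs.
    return ref_reach_km * (ref_rate_gbps / bit_rate_gbps) ** 2

for rate in (100, 200, 400, 800, 1600):
    print(f"{rate:>4} Gb/s -> ~{dispersion_limited_reach_km(rate):.2f} km")
```

This reproduces the 10 km at 100 gig dropping to 2.5 km at 200 gig, and shows why, under this simple model, IMDD reach at 1.6T and beyond collapses to hundreds of meters without coherent-style signal processing.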
PAM4 was one of them. And now we have to start relying on some of the tricks that we played in the coherent world in order to send signals over very long distances. In coherent, if you remember, ten or fifteen years ago people were kinda stuck at 40 gig and said, I don’t know if I can send 40 gig over a reasonable distance in the network. Now we send terabits, and we’re relying on this coherent technology, which does very sophisticated signal processing on the optical signal. And so the argument is that as the bit rate increases, the penalties are going to continue to increase, and we may not be able to send the signal over the distance required in the data center without applying some of the tricks that exist in the coherent world.
And so you have to bring some of those tricks into the short-reach optics. Now, your question is, well, the economics aren’t gonna make sense for that, and that’s true. The question is, do you wanna go to 3.2 terabits or not? Because at some point, the physics is gonna work against you, and the only way you’re gonna get there is by increasing the sophistication of that solution. Now, that doesn’t mean you have to bring in a full coherent solution today.
We can send the signal thousands of kilometers; we don’t have that need inside the data center. So people talk about coherent lite as an example: let’s bring some of the coherent technologies into this solution and apply them inside the data center so that we can go two kilometers or 10 kilometers. And I think that’s where we’re gonna see some industry investment: what are the elements of that coherent solution that you could bring in, minimally, in order to not have the cost go too far out of line, but solve some of these problems you’re gonna have with the impairments?
Simon Leopold, Data Infrastructure Analyst, Raymond James: Is there a way you could help us understand the relative price points? So an 800-gig inside-the-data-center PAM4 device compared to an 800-gig ZR pluggable, what’s the relative price difference there?
Bill Gartner, Optical Business Unit, Cisco: Factors of difference. I would say it could be anywhere from four to 10x. Right. So it’s a big price difference. It’s not like a 10% price difference.
It’s factors of pricing difference, and that scares people. It’s like, how do you get that savings while you’re still trying to solve the problems inside the data center?
Simon Leopold, Data Infrastructure Analyst, Raymond James: And I guess one of the sort of classic arguments I hear is that it’s about volume. Mhmm. And so I’m struggling with whether or not this will apply. I think of the comparison to high-definition televisions. The first high-definition televisions were $10,000 apiece, and 10 people bought them. Mhmm. Now they’re $200.
Everybody has one. Who wouldn’t buy one? So are we in a situation where coherent technology is expensive because it’s a relatively low-volume device? Is it a question of, well, if you ramp your volume, then it’s price competitive? And how do you cross that chasm?
Bill Gartner, Optical Business Unit, Cisco: I think, certainly, volume is a huge factor in the cost. And yes, when you’re talking about selling tens of thousands, or a hundred thousand of something, versus millions, the cost curve changes pretty significantly. I think there’ll still be a premium for a coherent solution, but to the extent that it’s adopted widely and we do get those volume benefits, the cost will come down. So that factor of difference that I mentioned will certainly get compressed over time, but there are gonna have to be early adopters for that. The early adopters will have to start deploying and paying that premium.
Then I think we will see compression of the cost curve.
Simon Leopold, Data Infrastructure Analyst, Raymond James: And what’s your best prediction of how and when that happens? What’s the timeline?
Bill Gartner, Optical Business Unit, Cisco: So I think we’ll see it at 1.6T at two kilometers. As an example, at 800 gig we saw OIF define a coherent solution and an IMDD, sort of traditional, solution for the 10-kilometer application. At 1.6T, I think two kilometers will be that breakpoint: if you need to go beyond two kilometers, you’re gonna have to bring in some additional technology, like a coherent technology. So we will see a coherent lite, I think, for 10-kilometer applications at 1.6T.
And then at 3.2T, it could be at two kilometers.
Simon Leopold, Data Infrastructure Analyst, Raymond James: And you see that as, like, two years away, three years away?
Bill Gartner, Optical Business Unit, Cisco: A couple of years away, I would say. Yeah.
Simon Leopold, Data Infrastructure Analyst, Raymond James: So I wanna pivot the discussion to the wide area network.
Mhmm. So we spent most of the time on the data center because that’s what people care most about. But I think there’s been this debate about, well, okay, we get the AI thing, we get the clusters. What are the implications for the wide area network? Mhmm.
Bill Gartner, Optical Business Unit, Cisco: Yeah. So certainly for the hyperscalers that are redefining what was a conventional wide area network into now an AI WAN. It’s mostly those that are using inter data center communication to effectively create a larger-scale AI network. We’ve seen dramatic uptake there, as you alluded to earlier, even in our first-half results. We’ve seen a lot of optics being used in that application.
But I think more widely, as we start to see AI applications in enterprise take hold, our service provider customers are gonna begin to see much more traffic, whether because they’re hosting the AI application for those customers, or even in the case where the AI application is on prem and there has to be some reach-back to a data source. We’re gonna see different traffic patterns. And so we do expect that our service provider customers are going to see growth in their networks as well, as AI starts to take hold in enterprise applications.
Simon Leopold, Data Infrastructure Analyst, Raymond James: So you’ve been a proponent of an alternative architecture, or technology, known as ZR. Mhmm. This is the idea of pluggables used in the wide area network. Maybe help us understand a little bit of the argument and the economics.
I know we’ve been down this path before, but, you know, a little bit of a refresher. And where does the industry stand today on the transition from platforms to pluggables? Yeah.
Bill Gartner, Optical Business Unit, Cisco: So I would say, first of all, we acquired Acacia four years ago this month. It’s been a terrific acquisition for us. Acacia at that time was delivering technology to its customers, including Cisco, for optical system solutions, traditional transponders. But Acacia had also innovated in taking that technology and putting it into pluggable form factors. And we saw a trajectory where that pluggable would effectively be the same form factor that’s used inside the data center.
So there was no custom form factor required to support coherent. And we felt that would be a tipping point for customers adopting a coherent pluggable solution that would go directly into a router and replace a transponder. The way you can think about this: let’s say you have to send an optical signal 500 kilometers. One way to do that is to buy an optical chassis that has a transponder that can send that signal 500 kilometers. But now you can actually take a pluggable optic, put it in the router, and directly send that signal 500 kilometers, eliminating the chassis and eliminating the transponder.
And when we started to do some economic analysis on that, we saw that a transponder would typically be 200 to 250 watts, and a pluggable was typically 20 to 25 watts. So customers get a 90% power savings by eliminating the transponder, eliminating the chassis, and all the associated hardware that goes with it, like controllers and fans, and replacing that with a simple pluggable that goes directly in the router. And on top of that, we felt that, largely due to the volume curve you alluded to earlier, as customers began to adopt this pluggable technology, we would see it become a much more cost-effective solution than a transponder. So, apples to apples, a pluggable is gonna be cheaper than a transponder today. Customers get day-one CapEx savings and day-one OpEx savings: power savings and cost savings.
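Gartner's 90% figure falls straight out of the wattages he quotes; a quick check using the midpoints of the two ranges:

```python
# Power comparison from the quoted ranges: a 200-250 W transponder
# (chassis, fans, and controllers not even counted) versus a
# 20-25 W coherent pluggable in the router.
transponder_w = (200 + 250) / 2   # midpoint of quoted range
pluggable_w = (20 + 25) / 2       # midpoint of quoted range

savings = 1 - pluggable_w / transponder_w
print(f"~{savings:.0%} power savings per link")  # -> ~90% power savings per link
```

Since the chassis overhead is excluded here, the real-world savings would, if anything, be slightly higher than 90%.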
And our prediction was that that would be the dominant deployment model for inter data center communications, which is often a hundred kilometers, maybe a few hundred kilometers, and that we’d penetrate into metro applications, where service providers would start to adopt it as well, and even into long-haul applications. And I have to say we’ve been very pleased that that’s what we’ve seen. The top five hyperscalers are all deploying this technology in their inter data center applications. That’s where the volume really is, but we’ve got 300 service provider customers now deploying this technology as well in metro applications, and we just introduced 400 gig for a long-haul application as well.
So my prediction is that this technology is gonna continue to attract a lot of the industry investment. It’s open. It’s compliant with standards, which is new for the optical industry, because historically it’s been a closed, proprietary solution. So it gives customers choice where before they did not have choice. The economics are super compelling.
And so over time, I see this becoming the dominant deployment model for metro networks and even many of the long-haul networks.
Simon Leopold, Data Infrastructure Analyst, Raymond James: And I have the impression that this market had a pause during 2024, and it sounds like it’s sort of restarted and getting moving. Was that a supply chain related event? Was it just customer concentration? What changed?
Bill Gartner, Optical Business Unit, Cisco: I think the service provider market in general has been slower to recover from what was a supply chain issue. But also, many of our customers invested in 5G infrastructure, did not really see a return on that investment, and I think are looking for the next thing that would stimulate them to build a new network. And in many cases, it’s a refresh, like, just a technology refresh.
So I think we’re probably into a normal technology refresh cycle here; I don’t think we’re gonna see another 5G out there. AI could be a stimulant for service provider customers that either begin to host AI applications for enterprise customers, or, as the enterprise market starts to build out on-prem solutions, we could start to see service providers having to build out more capacity as well.
Simon Leopold, Data Infrastructure Analyst, Raymond James: So I got a couple of questions on email that I just wanna hit before we run out of time. One of which is, how do you see your position in optics helping to pull through sales of switches, routers, or your Silicon One? What’s the attach rate and the relationship between the businesses?
Bill Gartner, Optical Business Unit, Cisco: I would say optics probably doesn’t pull; it’s more the opposite. I think Silicon One helps us pull through optics sales. But I would also say that Cisco has, without question, the most rigorous qualification process for optics in the industry. I would put our qualification process as the gold standard for the industry, in that we test optics against every single optical spec, every single electrical spec, under all operating conditions, under all environmental conditions. And we subject them to really brutal corner cases. Like, we’ll put the optic into an environment that looks like a host, and we start to vary the voltage on all the signals coming from the host.
We start to vary the skew, or the timing, on all those signals. We do that at various temperatures, so that when we put a Cisco label on an optic, it is absolutely gonna be a robust, highly reliable solution for our customers. And I would say the one area where we are now seeing more uptake is customers buying our optics, in some cases, for third-party applications, where they’ll say, I’m gonna buy Cisco optics and put them into a white box solution or a competitor product. That, I think, is a new dynamic for us, and I think our qualification is helping give customers confidence that when they put the optic in, it’s gonna be highly reliable.
Simon Leopold, Data Infrastructure Analyst, Raymond James: And when we think about the evolution toward these enterprise opportunities, I think it’s really intriguing. You’ve got this partnership with NVIDIA to run NVIDIA on Silicon One. Do you see optics getting pulled in? Will enterprises be employing your optics in AI solutions?
Bill Gartner, Optical Business Unit, Cisco: Yes, completely. So AI HyperFabric, which is Cisco’s enterprise AI solution, will include Cisco switching as well as Cisco optics as part of that infrastructure.
Simon Leopold, Data Infrastructure Analyst, Raymond James: So you made a comment very early that I sort of glossed over, but somebody is coming back to me on it. You used the phrase hyperscalers might be taking a breather. You know the world I live in. Was that intended to be something that you’re seeing right now?
Can you elaborate on sort of what you meant by that?
Bill Gartner, Optical Business Unit, Cisco: I should be very clear about that. I think, net net, our business is up and to the right with hyperscalers. But when you look at each one individually, you’ll see different dynamics. Some are in high demand and, you know, stepping on the gas, and others might be waiting for something like a fiber build-out or certain parts that they’re trying to get. On average, we’re seeing uptake in demand here.
Simon Leopold, Data Infrastructure Analyst, Raymond James: And when we think about the business, I think we’ve seen the power of the purse for the hyperscalers. Mhmm. How do you think about the idea that they’re crowding out telcos as customers?
Bill Gartner, Optical Business Unit, Cisco: I don’t really see that. I mean, I actually think the telcos benefit from what the hyperscalers are doing, because if the telcos look to the hyperscalers and say, I wanna be on that cost curve, and they adopt the technology the hyperscalers are adopting, usually earlier, the telcos can effectively get a significant cost advantage day one as they start deploying technology. They’re typically following the technology curve of the hyperscalers. Hyperscalers might be earlier in deploying 800 gig or 1.6T. So to the extent that the telcos adopt that technology in pretty much the form the hyperscalers are, they can get a very significant cost advantage.
That would be hard for them to get historically.
Simon Leopold, Data Infrastructure Analyst, Raymond James: So we’re just about out of time. I always like to close with a question like the following: what do you think is the least appreciated aspect of Cisco’s optical business?
Bill Gartner, Optical Business Unit, Cisco: So I would say, first of all, the Acacia business has been a terrific win for Cisco, and I think we’ve been right on target in predicting how that technology would evolve over time and penetrate various segments, whether it’s inter data center or metro. And the other is that our optics business is really sort of a crown jewel within Cisco, in that it is a huge business for us across our enterprise customers, public sector, service provider, and hyperscalers. And I think it has the potential to be a much bigger business going forward as we look at AI build-outs for enterprises as well.
Simon Leopold, Data Infrastructure Analyst, Raymond James: Oh, great. Bill, thank you very much.
Bill Gartner, Optical Business Unit, Cisco: Thank you, Simon. Fun as usual.
Simon Leopold, Data Infrastructure Analyst, Raymond James: Folks, thanks for joining us. This is Simon Leopold and Bill Gartner signing off from OFC. Thanks, everybody.