On Wednesday, 13 August 2025, Arista Networks (NYSE:ANET) presented a strategic outlook at the J.P. Morgan Hardware & Semis Access Forum, emphasizing its focus on AI networking opportunities. The company highlighted its advancements in Ethernet technology and AI networking, while also acknowledging challenges in expanding its customer base. Arista’s commitment to innovation and market leadership was evident, yet it faces competition in a rapidly evolving sector.
Key Takeaways
- Arista is focusing on AI networking through scale-out and scale-up technologies, with Ethernet expected to dominate by 2028.
- The company is seeing growth in its cloud titan customer base, with significant GPU cluster scaling underway.
- Arista’s new product developments, like Tomahawk 6, are positioned to meet high-density IO needs.
- The company is expanding its presence among neo-cloud customers, increasing from 15 to 25-30.
- Arista is strategically diversifying its revenue streams, targeting enterprise deployments and sovereign wealth funds.
Financial Results
- Cloud titans are expanding GPU clusters, with two customers nearing 100,000 GPUs, and all projected to surpass this mark.
- A third customer is expected to achieve similar scale early next year, while a fourth is ramping up more slowly.
- The addressable markets for scale-out and scale-up technologies are projected to be roughly equivalent, with scale-up Ethernet expected to become a realistic addressable market around 2028.
Operational Updates
- AI Networking: Arista is focusing on scale-out (full mesh interconnect between GPUs) and scale-up (high-speed connectivity at the rack level), with AI now the predominant use case for 800 gig.
- Data Center Interconnect (DCI): Increasingly important for connecting data centers in metro or campus environments, driven by constraints on building large facilities.
- Technology and Product Development: Arista is leveraging new chipsets like the Tomahawk 6 and Tomahawk 5 Ultra for scale-up use cases, with its 7700R Series supporting disaggregated scheduled fabrics.
Future Outlook
- Market Growth: Arista anticipates continued demand for AI networking, with no slowdown expected through 2027.
- Technology Roadmap: Focus on integrating Ethernet technology for scale-up use cases, optimizing hardware and software for AI networking.
- Strategic Partnerships: Engaging with technology companies and sovereign wealth funds to explore new opportunities.
Q&A Highlights
- Disaggregated Scheduled Fabrics (DSF) vs. Leaf Spine: DSF is productized in Arista’s 7700R Series, delivering the benefits of a single modular chassis in disaggregated form, with trade-offs around cluster size and architecture.
- GPU Cluster Scaling: Tomahawk 6 supports roughly 131,000 GPUs in a two-tier network (at 200 gig per accelerator), with larger networks requiring strategic approaches such as a third tier or pods.
- Hardware vs. Software Advantage: Both hardware design and software management are critical, with middleware intelligence as a key differentiator.
- Blue Box Opportunity: Arista’s product integrates hardware, software, and diagnostics, showcasing the company’s engineering capabilities.
For a more detailed understanding, readers are encouraged to refer to the full transcript.
Full transcript - J.P. Morgan Hardware & Semis Access Forum:
Sami, Host: Okay. Great. Yes, I think the mic is open. So thank you, everyone. I have the pleasure of hosting the next session with Arista.
And we have with us Martin Hull, who is the Vice President and General Manager of Cloud and AI Platforms, as well as Rod Hall, ex JPMorgan and now in Arista Finance. So Rod, thanks for being here as well.
Rod Hall, Finance, Arista: Sure, Sami. Good to be here.
Sami, Host: Thank you both for the time.
I’ll start with a very easy, softball question to get us going. A lot of the conversation over recent months has moved very quickly to scale up networking. We understand AI is an incremental revenue opportunity for the company, but just help us think about how you’re thinking about the TAM in scale out versus scale up, and we can go from there, and particularly what’s addressable for Arista.
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: Yes. So let’s level set and make sure we’re all talking about the same things. In this AI networking explosion, which clearly we’re very excited about, and I think all of you are, most of the questions on the various investor calls are primarily about AI. What is the AI networking that we’re seeing? The primary part of AI networking is tens, hundreds, thousands of GPUs in a physical location, and you build a high speed full mesh interconnect network between those.
That’s what we call the scale out network. That’s that full interconnect, tens, hundreds, thousands. Jayshree talked about customers getting towards 100,000. In front of that is the traditional existing data center, which is connectivity to the outside world. That’s what we call the front end network.
So there’s a front end and there’s a back end. So the back end is the scale out. What we’re increasingly seeing is that at a rack level, where you might have multiple GPU enclosures, you want to be able to provide additional high speed local connectivity between those enclosures at a single rack or a pair of racks. And that is introducing this new technology of scale up rather than scale out or front end or back end. So it’s a high speed interconnect that is potentially 4x or 8x higher speed than the scale out, but it’s constrained to single digits of compute nodes, GPU clusters.
Because it’s higher speed, fewer ports, you could do some math and say that the scale out TAM is roughly equivalent to the scale up. But scale up is an emerging market. It’s not here yet. It’s new, it’s nascent. Now two years ago, we were talking about scale out being a new emerging market.
Now we’re in year two, moving towards year three of that. I forget exactly how long it’s been since GPT burst onto the scene, but it feels like about two years. So these are incremental TAMs. The scale up TAM is incremental on top of the scale out, but it’s later. I don’t want to put numbers on it.
Other people can put numbers on it, but it is incremental. So when we think about how that relates to Arista: the first phase is going to be primarily driven by proprietary technologies in year one, maybe into year two. You’re then going to start to see the introduction of an Ethernet technology for that scale up use case. And once it’s an Ethernet technology, it becomes a real addressable TAM rather than just a market sizing exercise. So we think maybe 2028, maybe sooner, maybe later; it’s difficult to tell this far out.
We think in 2028 that scale up networking for Ethernet becomes a realistic addressable market for us. I won’t put numbers on it, but it’s roughly equivalent to the size of the scale out network.
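To make the “higher speed, fewer ports” arithmetic concrete, here is a minimal sketch of why the two TAMs can come out roughly equal. The 8x multiplier (the top of the 4x-8x range mentioned), the one-port-per-GPU scale-out assumption and the constant price per bit are illustrative assumptions, not company figures.

```python
# Hypothetical round numbers, only to illustrate the point: if price
# tracks bandwidth (roughly constant price per bit), a higher-speed,
# lower-port-count scale-up market sizes out about the same.

gpus = 100_000                  # cluster scale discussed in the session

scale_out = {"gbps_per_port": 800, "ports_per_gpu": 1.0}
scale_up = {"gbps_per_port": 800 * 8, "ports_per_gpu": 1.0 / 8}  # 8x speed, 1/8 ports

def relative_tam(net, price_per_gbps=1.0):
    # TAM proxy: total bandwidth sold x price per unit of bandwidth
    return gpus * net["ports_per_gpu"] * net["gbps_per_port"] * price_per_gbps

print(relative_tam(scale_out))  # 80,000,000 units
print(relative_tam(scale_up))   # 80,000,000 units -- roughly equivalent
```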
Sami, Host: Okay. And so you expressed overall confidence that Ethernet is really where the market is going by the time you get to 2028. We saw this play out on the scale out side as well, where we started with InfiniBand and, over time, Ethernet became the more popular option for customers. But is your expectation that scale up is also primarily Ethernet just driven by what you saw in scale out? Or are there other reasons, including differentiation in the technology, driving that expectation that Ethernet is eventually where this industry goes on scale up as well?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: So some of the answers to those kinds of questions depend on when, not if. And we’ve seen this within scale out, right? It has predominantly moved to Ethernet. There’s been a crossover point. But if you’d asked me a year ago or two years ago, and you investors were asking some of these questions: will Ethernet win over InfiniBand?
And we were quite confident in our yes. I don’t want to get quite so confident on the scale up. It’s still very early. But will Ethernet be an option? Yes.
We’ve seen this with the introduction of new chipsets from Broadcom, the Tomahawk 6, the Tomahawk 5 Ultra. One of the positionings for those chipsets is that scale up use case. So there’s an Ethernet option. If you’re deploying scale up networking today, you’re probably doing it with the predominant supplier of GPUs, which is probably going to drive you towards using their choice of technology, NVLink. Over time, as you get a choice in GPUs, customers will express a preference for using something that’s open, flexible and multivendor over being encouraged to use a single vendor’s proprietary technology.
Now again, there have been some moves within NVIDIA to open up some of the IP blocks so that other people can put them into their silicon. It’s still a closed technology, so we can have that debate. So do I see Ethernet becoming a significant part of scale up? Yes.
How much share and how fast is where we’ll have that debate for the next couple of years. But scale up, absolutely. As you see new GPU vendors come in, GPUs, accelerators, TPUs, whatever you want to call them, I don’t see any of those embedding these other technology choices. They embed an Ethernet choice. Scale up is inevitable.
Whether it’s Ethernet or not, we can have that debate like we did about InfiniBand versus Ethernet for two years.
Rod Hall, Finance, Arista: Or about five other technologies versus Ethernet. We kind of know how that all played out. Yes. The scale economics come into play as well. We’ll see, like Martin says, but Ethernet’s history has proven that it tends to be the one that ends up succeeding.
Sami, Host: Okay. Maybe before we jump into some of these scale up, scale out questions specific to your customers, one of the other questions I get a lot from investors is how we should think about the between-data-center, or DCI, opportunity for Arista. How do you play into that? What does that addressable market look like for you?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: Yes. So data center interconnect is an interesting technology, and I’ve said this before many times. When I launched our 400 gig portfolio three or four years ago, the primary use case for 400 gig was data center interconnect between large multi tiered data centers using 400 gig ZR technologies. At the same time that I launched that 400 gig portfolio, I said the secondary use case was this thing called AI and ML.
And people looked at me and said, it’s for what? AI and ML? It’s DCI. So now we’re on the other side of that. The predominant use case for 800 gig is AI.
Nobody disagrees with that. The secondary use case is data center interconnect. If I’m building out a campus of data centers, I’ve got six, eight, 12 buildings in a local geography, and I need to provide high speed bandwidth between those physical buildings. They might only be a kilometer or two apart. Ideally, I’d have my big super cluster stretched infinitely across all those buildings.
But in reality, I’ve got finite bandwidth. So I’m going to have to design clusters, bubbles, zones and then mesh them as best I can. So we are going to see an introduction of data center interconnect technology for joining buildings together in a metro or a campus, then we go to metro, then we maybe have to think about, well, I’ve got big buildings that are 20 miles, 30 kilometers apart, can I use data center interconnect, can I get access to fiber and how fast can we push that technology? So there’s going to be a growth in that data center interconnect because people are constrained by how big you can build a data center, how much power you can get on a campus, how much cooling, how much technology. And so that’s forcing people into multiple buildings and stretching those buildings distances apart.
So data center interconnect does become a key driver for a growth in the AI segment. And then the other debate about that is, well, the technologies that we announced at 400 gig, our R Series switches, routers, fixed and modular were perfect for 400 gig data center interconnect. The products we’ve announced for 800 gig are the same technologies. They’re perfect for data center interconnect. They’re now just being used on that back end network, not necessarily the front end network.
So yes, we do see data center interconnect as an interesting slice of that AI segment.
Sami, Host: Okay. Any way you would quantify it relative to the opportunities within the data center that you have?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: It’s always going to be a slice, because you’re not going to put full mesh bandwidth between those buildings. But then you’re typically using more complex systems, and you’re maybe designing to a different set of rules. So I don’t want to size it. But if we look at it, data center interconnect for AI is going to be used by the big players, who you should probably ask about next. The bigger players will have these buildings in a similar location.
When we talk about smaller customers who have got a single location or single digit locations, data center interconnect is less relevant for them. So that’s how it will split. Is there a use case? And even when there is a use case, it’s a small percentage of that total aggregate spend.
Sami, Host: Got it. Okay. So moving to the large customers. I think on the earnings call, you said two of your cloud titan customers are at or near the 100K GPU cluster size in terms of deployment. So as we look beyond 2025, or from 2025 into 2026, what does the growth trajectory with these customers look like?
Do you just continue to scale with them? Or do you start to hit a plateau in terms of the deployment pace? How should we think about where you go next with these customers that are already close to the 100K GPU cluster size?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: Yes. So we can refer back to the prepared remarks on the earnings call from Jayshree and Chantal, right? Those top customers are still on track for 100K GPUs. I don’t think we’ve given a hard date, but we did say we expect them to be there before the end of the year. All of them are going past 100K, right?
There’s no slowdown in the demand for AI. There’s no slowdown in demand for accelerators. There’s no slowdown in demand for networking for those accelerators, for those AI clusters. So as we go into 2026, those customers will continue to grow. We also expect to add incrementally other customers.
We may not give as much detail about them. And we don’t see AI growth slowing down in 2026, based on publicly shared TAMs. I don’t think anybody is seeing it slow down in 2027. You can also look at the public companies that are talking about their CapEx budgets: ’25 into ’26, they’re incrementally increasing their expected spend. I think what gets challenging with some of that is that you can actually have a CapEx budget and be unable to use it all, because there aren’t enough buildings, space, power, cooling, physical infrastructure and accelerators to satisfy this demand.
But the largest consumers of AI are incrementally increasing their CapEx budgets. Whether they’re an Arista customer or not, that’s still a good thing. The AI market is continuing to expand, and then we can take our fair share of that. Okay.
Rod Hall, Finance, Arista: One thing I’d just add to that in terms of background: the reason that we gave some detail around these customers is because we started off with zero share in back end. And we wanted to let people know, hey, we’ve got traction in back end, which now we feel pretty confident we do have. So we no longer feel we need to disclose as much in terms of which customers, where, how big, etcetera. The other thing I would add is the third customer, we did say, will achieve scale. And I wouldn’t get caught up in these GPU numbers either, because all that’s meant to convey is that we’re at scale.
We’re in production with these customers. The third one will be there, we’ve said, early next year. And then the fourth one is this InfiniBand customer; that’s a slower burn, and we haven’t really said much about the ramp on that. So just to be clear about that and give a little bit of background as well.
Sami, Host: And maybe the follow-up question was going to be on the fourth customer. You obviously classify them as a large deployment, or cloud titan even, relative to the back end. So there must be some level of confidence that they will eventually hit that size cluster on Ethernet as well? Or maybe the question is: is there visibility that they will get to that deployment size with Ethernet in 2026?
Or is there limited visibility on that front with that InfiniBand customer at this time?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: I think the debate only really happens around timing and speed of progress, right? We are happy with the progress we’re making as a technology, happy with the speed of growth of those clusters. That’s going to depend on things that are potentially outside our control. We’re talking about 2026; that’s a year or so from now. So we are extremely happy with the success.
We’re happy with our progress. The customer will go as fast as they want to go.
Sami, Host: Okay. Maybe I’ll introduce one more nuance there. How much of the confidence on that customer, or the limited confidence for 2026, is driven by an InfiniBand to Ethernet transition versus an InfiniBand to Spectrum-X to open, multivendor Ethernet transition?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: Yes. I think the key decisions have been made that Ethernet is the answer. Not to say that any technology organization or any large customer can’t revisit those decisions on a daily, weekly, monthly basis. So again, I don’t want to get too far out ahead of our skis here. But no, the decisions have been made that Ethernet is the right technology, and that’s not in doubt.
Sami, Host: Okay. Maybe moving to the enterprise and neo cloud category, where you specified 25 to 30 customers versus 15 prior. That would indicate that you’re seeing a significant step up in engagements with that sort of Tier 2 category of customers. Is there something in terms of timing that helped this quarter? Or should we expect a similar continued ramp with these Tier 2 customers?
How much more headroom also do you see on that front?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: So again, going back probably a year, a year and a half, we were saying that every large enterprise has to have an AI strategy. That AI strategy could be, we’ll put an AI project into somebody else’s infrastructure. It could be that we need to have a business model, a business plan. It could be that we spin up a technology group internally to go build a pilot.
And now what we’re seeing, a year, eighteen months later, is a number of organizations who are starting to progress from that discussion and conversation into pilots, trials and production. They range from, let’s say, enterprise customers who will be putting in a small cluster, tens, dozens, hundreds of GPUs, to organizations who have access to facilities and buildings and are now spinning up AI GPUs as a service. So they’re putting, again, relatively small clusters into many of their existing locations. Maybe they’ve got access to power and cooling and there’s some subleasing going on back to other customers. So when we talk about AI as a service, we’re talking about enterprises who are technology centric or technology focused; they will be starting to do AI pilots and trials now.
Some of those might have a second phase and a third phase, but they’re not going to be a multiyear rollout like a hyperscaler or a cloud titan would do. They just don’t have that scope. It’s like asking how many data centers any organization can have: if your business isn’t data centers, it’s a single digit. So you pivot that back and say, well, let’s look at these neo clouds and sovereign wealth funds.
Yes, they’re also making investments, but they have to get the funding in place and they have to get access to facilities and power. So they’re going to be that second wave or third wave, and they’re probably in Phase 1, and hopefully we’ll be in Phase 2 and Phase 3 with them. So they are starting later. From the neo cloud perspective, they won’t necessarily be as big as the biggest, largest worldwide players. But they are going to be significant.
Some of those names would be known to you if you’re studying this space. And then we have the enterprises, tech centric organizations and, effectively, Tier 2 service providers, Tier 2 hosters. So between all of that, tens to dozens to hundreds of customers who are deploying a couple of thousand GPUs in a data center. So that’s the scope of the scale. And when we say 25 to 30, that’s an estimate at the moment.
And yes, we should expect that number to grow from here to the end of the year. Incrementally, each quarter, we should be adding new opportunities, new wins.
Sami, Host: And is it like with the hyperscalers, where you had a good eighteen to twenty-four month period of pilots to production? Is that pretty similar with the smaller scale customers as well, in terms of the engagement before you start to see material revenue out of production?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: There’s no single answer to some of those questions. If it’s a relatively small deployment and there aren’t milestones and step functions, then it’s a normal transaction. For others, where you’ve got phases and rollouts and milestones, then yes, you’re going to see a similar cadence of trials to pilots to production. It’s all going to fit within that.
Rod Hall, Finance, Arista: Just another one of those things where there’s a perception, potentially. We’ve gotten questions from investors about whether we can be as successful with these smaller cluster sizes as we have been with the big ones. And again, we’re disclosing some of this to let people know, yes, we feel like we’ve got good traction, good momentum there. So some of that same type of dynamic is going on from a communication point of view, just to give a little bit of background to it.
Sami, Host: Okay. Let me open it up and see if anyone in the audience has a question.
Unidentified speaker: Sort of product related, but can you just walk through the pros and cons of a customer deploying kind of a standard scheduled fabric solution, like Jericho/Ramon boxes, versus a more traditional Leaf Spine with Jericho and Tomahawk boxes?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: Okay. I’ll try. So you said disaggregated scheduled fabrics; that’s DSF. That architecture is productized in our 7700R Series, where you have two sets of fixed configuration devices: you have an edge leaf switch and you have a centralized fabric switch device.
That architecture is exactly the same architecture you have in the fully modular 7800. So there’s no difference in the architecture, no difference in the forwarding, the life of the packet, the behavior, characteristics, features. What you’re doing is allowing the customer to physically position that disaggregated leaf switch next to the compute and then have a single set of connectivity to a fabric tier. So it looks like a leaf spine physically. You cable it like a leaf spine physically, but you get the benefits of a single modular chassis.
So the tipping point for going from a single modular chassis to a disaggregated solution is 576 ports of 800 gig or 1,152 ports of 400 gig, because I could try and build a bigger chassis, but most people wouldn’t be able to get it through the doors of their buildings. You think I’m joking?
Rod Hall, Finance, Arista: No, I know you’re not. I just think it’s true.
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: You walk around our headquarters and these things are lined up like monoliths. So we could build a larger chassis, but it’s not practical. So we took that modular chassis concept and stretched it. We can have 128 fabrics and, I forget exactly how many, distributed leaves; we can scale that to more than 6,000 ports. So I’ve got my modular chassis stretching to the sky.
That’s the architecture. So then you compare that to a Tomahawk based single chip architecture. Tomahawk is a great forwarding architecture; Tomahawk 5 gives me 51 terabits of switching, Tomahawk 6 gives me 102 terabits of switching in a single device. That’s great for 51.2 terabits of local IO. Once I go past that, I need two of them.
Actually, you can’t do it with two, because you lose half the bandwidth. To go from one, the next stop is six. So to go from 51 terabits to 100 terabits of IO is six switches, then it’s 12, then it’s 24, and we scale this up to 512, 1,024, 2,048; it’s just a mathematical progression. It’s simple until you run out of ports. That first hop switch can have half of its IO to talk to compute and half of its IO to talk to the second tier.
When I can’t split it up into any more granularity, I need to add a third tier. So, in contrast to that DSF architecture, I need to add in another layer of cables, more racks, more power, more cooling; then we can have a conversation about optimizing for power, having LPO class optics that save power at the optics level. So the trade offs are most of what I just said, right? How big a cluster do I want to build? What’s my future proofing?
Do I want a VOQ architecture that gives me consistency? Do I want to stay with fixed configuration devices with a higher radix, but be fixed in how big I can build a two tier network before needing to go to a third tier? We actually find customers deploying Tomahawk based leaf switches and Jericho based spine switches. We estimate that most customers are probably putting that kind of architecture in. Some who are scaling a little lower will be more than happy with a two tier, Tomahawk only architecture.
There are cost trade offs, power trade offs, other things in there. That’s quite a long answer, but I think I got most of your points.
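To ground that progression, here is a rough Python sketch of the two-tier arithmetic Hull describes. The non-blocking assumption and the round chip figures (51.2T for Tomahawk 5, 102.4T for Tomahawk 6) are illustrative only, not an Arista sizing tool.

```python
# Back-of-envelope two-tier leaf-spine math, all in integer Gbps.

def two_tier_switch_count(host_io_gbps, chip_gbps=51_200):
    """Switches for a non-blocking two-tier fabric: each leaf spends
    half its IO on hosts, half on uplinks absorbed by the spines."""
    leaves = host_io_gbps // (chip_gbps // 2)
    spines = leaves * (chip_gbps // 2) // chip_gbps
    return leaves + spines

# One chip covers 51.2T of local IO; doubling that takes six switches
# (4 leaves + 2 spines), then 12, then 24 -- the progression described.
for host_io in (102_400, 204_800, 409_600):
    print(host_io // 1_000, "Tbps ->", two_tier_switch_count(host_io), "switches")

def max_two_tier_hosts(chip_gbps, port_gbps):
    """Two-tier ceiling: at most `radix` leaves (one link per spine),
    each with radix/2 host-facing ports, i.e. radix^2 / 2 hosts."""
    radix = chip_gbps // port_gbps
    return radix * radix // 2

# Tomahawk 6 (102.4T) with every accelerator at 200G: radix 512, so
# 512^2 / 2 = 131,072 -- the ~131K two-tier figure quoted later on.
print(max_two_tier_hosts(102_400, 200))  # 131072
```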
Unidentified speaker: So it’s a radix argument, going with DSF versus Leaf Spine?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: So there are breakpoints, right? I can have a single switch, and then I hit the scale limits, and then maybe DSF is interesting in the middle. And then there’ll be a point at which even DSF can’t scale that big, and I need to go to a three tier network or a four tier network and do data center interconnect. So DSF fits into the sweet spot at a certain size.
And then maybe some evolution of that will allow DSF to scale more in the future. Let’s see.
Unidentified speaker: Perfect. Thank you. Just back to the tiering question, I was curious: once we go past 100K GPUs in a single cluster, do you think we’ll need three tiers on the networking side? Or how do you see that evolving?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: There’s this magic number of 100K. At the Broadcom launch of Tomahawk 6, they used 131,000 and something; I can’t remember the last three digits. And that was based on a two tier network of Tomahawk 6s with all the accelerators running at 200 gig. So you can get to 131K with a two tier of Tomahawk 6s.
If you want to get past that, you need Tomahawk 7. And I didn’t just announce it, right? You need whatever follows on. You need more than 100 terabits, otherwise you can’t get past it. That’s physics.
Or you can just go to 100 gig for every compute node, but that’s not what people want to do. If you want to do 400 gig, your 131K comes down. So I can’t answer the question about how to get past 131K without knowing how many GPUs you’ve got and what the connectivity is. So people are then looking at data locality, moving these clusters into smaller pods, so that I can build a pod, a cluster of, let’s say, 100K, and then I have four of those and I connect them together with a full mesh.
But the data locality means that I don’t necessarily have to have 200,000, 400,000 in a single cluster. You then get into failure modes, troubleshooting, operational challenges. And again, connecting those pods and clusters together is a technology of its own. It’s kind of like data center interconnect if they’re across buildings. If it’s in a building, I’m not necessarily going to have full cross sectional bandwidth between all four pods if I don’t need it, because it’s a high incremental cost to put in that third tier of non blocking bandwidth.
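As a companion to the sketch above, the same two-tier bound shows why the ceiling moves with per-GPU speed; again, these are round, hypothetical figures rather than vendor specifications.

```python
# Same two-tier bound, showing why the ~131K ceiling comes down as
# soon as each GPU wants more than 200G.

def max_two_tier_gpus(chip_gbps=102_400, gpu_gbps=200):
    radix = chip_gbps // gpu_gbps   # usable ports per switch at this speed
    return radix * radix // 2       # half of each leaf faces the GPUs

print(max_two_tier_gpus(gpu_gbps=200))  # 131072 -- the ~131K figure
print(max_two_tier_gpus(gpu_gbps=400))  # 32768 -- "your 131K comes down"
```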
Unidentified speaker: Just in terms of the scale up opportunity, I think the competitive advantage in scale out in general has been the combination of hardware optimization and software. As you think about scale up, do you think that one of those is more important than the other?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: So what were the two? Keep the mic. What were the two? So what we’ve seen over the last decade is that our relationship with our key customers has meant that they come to us for our best of breed hardware designs even if they choose not to run our software. There are a number of reasons for that: efficiency of design, our engineering team; we actually produce lower power systems compared to something that looks identical. But above that, between the hardware and the software, there’s this hidden middle layer of intelligence, whether it’s power management, CPU management, link efficiency, link training, SerDes, identifying unknown errors before they become a problem. That middleware value is delivered through EOS software, but a customer that’s running their own open operating system is still using that same middleware to control our hardware.
So the efficiency of the hardware design, absolutely. And if you put two designs side by side, well, we stand behind ours for obvious reasons. But I think that interaction between our hardware and our middleware intelligence, that’s fundamentally everything we’ve done in the company over the last twenty years: how we program the hardware, manage the hardware, identify issues, even down to the manufacturing processes that we use to make sure the customer gets a high quality product, right? If you have one and the competitor has one, you put them side by side, maybe they behave very similarly. Get 1,000, 10,000, 50,000 of these things, and you start to notice the differences.
So yes, ultimately in the scale up back end network, hardware design and software management of the hardware design, the quality, the traceability, all the other intelligence that we put into our products will still be an advantage.
Rod Hall, Finance, Arista: I mean, there’s a strategic element to that, too, which is the lock-in NVLink provides. Ethernet releases that lock to some extent, or makes it less strong. So that’s another part of the Ethernet story. But like Martin said, ’28 is more the year where we would start to expect to maybe see a little bit of something happening there.
Unidentified speaker: I would just be curious to get a sense of how you think UEC, with, I think, a lot of the routing and traffic control functionality moving to the NIC, would affect Arista’s product strategy going forward.
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: So in June, when we had our Investor Day, or whatever we want to call it, in New York, we talked about all our hardware being UEC ready even then. So the UEC UET 1.0 spec that came out doesn’t change the products we have: the Tomahawk 5, the Tomahawk 6, the Jericho 2C+ and Jericho 3s and everything in between. They’re all UEC ready or UEC compatible. So then you come back to the question I answered before about radix and scale: UEC doesn’t fix how many IOs a chip has got. I still have to build this very large network.
And if it was my money I was spending on a network for AI, I’d want to go best in class, best of breed. When we think about the percentage of spend on the network infrastructure, well, I think optics are more than half of that. So I’m not going to save anything by going cheap on my network if I have to put a third tier in. That comes back to that. So then we say, well, what other advantages do these deep buffer systems have?
It’s a safety belt, belt and suspenders. I can use all the UEC features that are out there and hopefully they’re perfect and nothing ever goes wrong. But when a link fails, when an optic fails, when I, for whatever reason, have some links that are a bit variable, don’t I want the intelligence and the smarts and the buffering so that I can actually investigate, troubleshoot, remediate without just pointing at the two endpoints and going, which one of you messed it up? I want intelligence in the middle, right? A month ago, two months ago, we talked about our AIOps, right?
The EOS advantages you have for monitoring and troubleshooting within that network infrastructure. So I’m going to want to have the best network that I can get. Okay.
Sami, Host: So maybe moving to, not Tomahawk 7, but Tomahawk 6. With the recent announcement, or the launch, of the chip, what does the typical gestation period look like in terms of Arista working with Broadcom on a new chip and getting a product out to customers? And do you see any major changes from the Tomahawk 5 generation that would have implications on market share for Arista?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: So Broadcom launched Tomahawk 6 two months ago now, in June. I was part of one of the launch videos that they published, so clearly I knew about it before they launched it. I actually have one sitting on my desk at home. It’s a mechanical sample, don’t worry.
But I’ve got a Tomahawk 6. And at Arista, we don’t preannounce products. We don’t tell you when the product is going to come out. We have that conversation with our customers under NDA. We’re working on joint development.
We were working on these joint development activities before Broadcom announced the chip. But we can’t physically get started on the engineering until their first samples turn up; then we get to higher quantities, and then we’ve got to get to production. Broadcom has their own release process from samples to production. That will typically be anything up to a year. So you would realistically expect that our production of any product based on their silicon would align to their historical timing from samples to production.
And again, I don’t want to preannounce an Arista product, and I certainly don’t want to speak to how long Broadcom may or may not take on this version of the chip. So when we get to whatever that point is, we expect to have a variety of products designed in cooperation with the customers that we’re working with and for more general purpose markets. I think at the Tomahawk six generation, the leading edge is going to be quite out ahead of the mass markets. But we do expect to have a variety of products designed for the right customers and the right use cases. And in that scenario, we would absolutely expect to get our fair share of this market opportunity.
And then you’re probably going to ask me what my fair share is, and my number may not agree with your number. So yes, we are very excited about Tomahawk 6. The innovation is around 800 gig or 1.6T. There are a few new interesting features in that silicon, which we’ll unleash through software, and then we’ll ship the products when we’ve completed our development and the product and the customers are ready.
What you sometimes see, and we’ve seen this with our joint developments with customers is we actually might be shipping a product, and we may not have told the public about it. It’s going to that customer for their use cases. You’ve seen us do this. There’s history of this, right? So just be careful sometimes with how you see some of these things.
Sami, Host: One of the questions we often get from investors on this front, although we haven’t really seen it historically, is: why don’t customers pause a bit when they know 1.6T is about to ship? They still continue to buy 800 gig, or right now they continue to buy 400 gig while ramping on 800 gig. Do you expect customers to pause at any point? Or what is the explanation for why they don’t, even though they know there’s a higher bandwidth solution coming?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: So there’s the micro answer and there’s the macro answer. So 100 gig technology is still shipping in very high volume. Why? Why isn’t everybody going to 400 gig? Well, most people don’t need 400 gig.
So if I don’t need it, why would I pay for 2x the bandwidth? It’s not the same price, but it might be the same price per bit. But if I don’t need it, why would I buy the next higher speed? So it can be driven by the use cases. Is there a use case for 1.6T?
Yes. But I need optics. I need GPUs. I need NICs. I need the whole ecosystem lined up behind that.
I need to have it in my hands. I need to test it. I need to qualify it. I need to start a new pilot, and then I start planning the rollout going forward. And what am I going to do to my business in the interim, put it on pause for a year, eighteen months?
What if the technology that I’m hoping for in the future slips, gets delayed? You’ve taken a significant risk. So why will people keep on deploying 800 gig? Because it’s here and it’s shipping in volume. Why are some customers still deploying 400 gig?
Because they started rolling it out on a multiyear evolution. You can’t just call time on it and switch to 800 gig without another qualification cycle. So you’re going to see these overlapping waves of technology, from 400 gig to 800 gig to 1.6T and 3.2T. They are coming quicker. It’s good.
Can customers afford to hit pause and wait for a year? Not often. There might be some parts of their infrastructure that they say, I have no plans to go to 800 gig over here. But over here in the corner where it’s a new technology or a new deployment, I’ll start with 800 gig. So you’re going to see these waves and say, there’s micro answers and there’s macro answers.
So we do see 800 gig growing rapidly, but 400 gig isn’t in decline. These are incremental. And when 1.6T comes out with the 200 gig SerDes, we don’t see 800 gig dropping and we don’t see 400 gig dropping. And quite frankly, 100 gig is still stable, based on 25 gig SerDes. So all of those different speeds can coexist in the market at the same time.
Then if you do a sum product of ports and speeds and bandwidth, the bandwidth shipping in the industry is going up year over year over year. So these are all growth opportunities.
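As a quick illustration of that sum-product point, here is a toy calculation; the port counts are invented for illustration only, not industry data.

```python
# Even with 100G flat and 400G steady, aggregate bandwidth shipped
# still grows year over year as 800G ramps.

shipments = {
    "year 1": {100: 10_000_000, 400: 4_000_000, 800: 1_000_000},  # ports by Gbps
    "year 2": {100: 10_000_000, 400: 4_000_000, 800: 2_500_000},
}

for year, ports_by_speed in shipments.items():
    total_tbps = sum(gbps * n for gbps, n in ports_by_speed.items()) / 1_000
    print(year, f"{total_tbps:,.0f} Tbps")  # grows year over year
```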
Sami, Host: Okay. I’m going to try and rapid fire through a few questions here.
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: Okay. I’ll try and rapid fire my answers.
Sami, Host: I’ll jump around in terms of topics, so apologies for that in advance. CPO, going back to scale up: how should we think about the need for Arista products to support CPO to gain opportunities in scale up? Like, how critical is supporting CPO to eventually seeing share in scale up?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: So the lowest cost, lowest power connectivity for short distances is copper. So for scale up opportunities, it may not even be CPO. We’ve got no objection to CPO, COBO, LPO, NOBO, whatever you want to call these technologies. We will do what our customers want, but we have to be convinced that it’s the right technology.
And if it is, we’ll absolutely implement it. Okay.
Sami, Host: Blue Box. You’ve talked about, or at least Jayshree has talked about, the Blue Box opportunity. Give us a sense of what it looks like. How is it different from what you’re doing?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: So I think it’s more a case of us describing what we’re already doing. And that is, I can’t remember where the question came from about hardware and software and the advantages and differentiation. Blue Box is an Arista product that’s got all our engineering intent baked into it, all our engineering and manufacturing diagnostics, software, intelligence, reliability, manufacturing DNA, baked into that product. That is the Arista Blue Box. That’s a sort of rapid fire answer.
So that Blue Box is that I think you’ll hear a lot more about that at the Analyst Day and towards the end of the year. But certainly, we believe that our products differentiate in the market with or without the EOS software.
Rod Hall, Finance, Arista: We haven’t talked enough about our hardware advantage, including that middle layer that Martin talked about. And we want to talk more about that, because we’re in hardware mode now. I mean, the engineering challenges are just ramping so rapidly. You probably wouldn’t believe it if we were to dive into that. So we want to be a lot clearer about that advantage that Arista has.
We have people like Andy Bechtolsheim who are working super hard every day. I think most people know who that is: a pretty good hardware engineer.
Sami, Host: So just to be clear, what you’re saying is it’s hardware plus the middle layer software, or firmware for lack of a better term. Right. Minus the EOS. Yes. And what does the margin structure on such a product look like? Once you take EOS out, how materially does it impact your margin structure?
Rod Hall, Finance, Arista: There is a margin structure on a product like that, but no, we’re not going to talk about margin structure there. I mean, we try to get paid for our value, though. We will say that. And we do have customers who will pay for that added value, and we do add value.
Sami, Host: So maybe let me ask it another way. How different is it going to be, from a margin structure perspective, from the white box companies? Where does the differentiation come in to differentiate on the margin?
Rod Hall, Finance, Arista: Well, we get paid for value. And if you’re adding value in the hardware layer, in this middle layer Martin talked about, then you get paid for that. But we aren’t going to quantify the differential between those two things. You won’t see us do that.
Sami, Host: Okay. Moving to the front end. One of the things that you mentioned on the earnings call as well is that there’s definitely a pickup overall, not only in the back end investment, but in what you’re seeing on the front end. We get this question a lot: what’s driving the investments on the front end? Can you correlate that purely to investments in AI?
Or is it a non-AI driver that’s now driving the upgrades on the front end? And when you’re seeing the investment on the front end, is it volume investment? Or are customers upgrading from 400 gig to 800 gig?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: So very rarely would a customer go back to a production data center, turn it off, rip out the infrastructure and replace it with 400 gig. What they do is look at the next new deployment and say, that’s going to be the new design; I’m going to put the new design in this building. So whether it was 100 gig to 400 gig or 400 gig to 800 gig, they’re not really upgrade cycles. It’s the net new wave of the new technology.
For an enterprise customer, if I only need two data centers, then I might start an evolution and transition within one of them, and I might do that upgrade cycle. So back to your question, the growth in the front end is multifaceted. There’s definitely a pull through from the back end: as we see more and more inference, we’re seeing public reporting from customers about the impact this is having on their front end data centers and their wide area and backbone networks. There’s a growth in traffic as we all increasingly use these AI tools and resources, or as AI-as-a-service and enterprise customers start deployments. Yes, there’s a growth in traffic on the front end.
And then, for a few years there was this rush to AI, and maybe there was an underinvestment in some of the front ends. So some of that will be a little bit of catch up. So catch up, growth of traffic, and then remediation of five and six year old technology, which is the technology refresh cycle that’s always there. Those are the drivers for that. We’re also seeing, in the enterprise, a little bit of repatriation: traffic that had moved to the cloud maybe moving back from the cloud. There’s some of that going on as well.
So all of those different waves are happening at the same time. So we’re having different customer conversations about what their drivers are.
Sami, Host: Got it. Okay. Last one. A lot of focus recently on sovereign opportunities. And there’s been this bunch of announcements coming out of the Middle East, and notable to investors has been that one of your larger peers has been mentioned and is participating, whereas Arista has been sort of visibly absent on that front.
How do you think about the opportunity, and are you tapping into it? Is it going to be through partnerships with the larger sort of partners that find their way into those announcements? Or how do you think about Arista’s position to tap into the sovereign opportunity?
Martin Hull, Vice President and General Manager of Cloud and AI Platforms, Arista: Yes. We would clearly look to partner with the technology companies that have already been announced for that. And again, we don’t necessarily announce something until it’s meaningful, real and in the rearview mirror. So don’t get carried away with the fact that we may or may not have been announced in any particular thing. We might be involved, we might not be involved.
I wouldn’t necessarily read into the headlines that we’re not there. Sovereign wealth funds are definitely interesting in this space, and we’re fully engaged.
Sami, Host: Okay. I will wrap it up there. All right. Thank you for the time. Thank you.