On Tuesday, 04 March 2025, Arista Networks (NYSE: ANET) presented at the Morgan Stanley Technology, Media & Telecom Conference, outlining its strategic position in the evolving networking landscape. The company highlighted its robust relationship with Meta and growth among other cloud customers, while addressing competitive challenges and opportunities in AI and cloud infrastructure. Despite some concerns about market competition, Arista expressed confidence in its revenue targets and strategic direction.
Key Takeaways
- Arista maintains a strong partnership with Meta and expects it to be a significant customer this year.
- The company is confident in achieving a $750 million AI revenue target for back-end solutions.
- Deferred revenue increased by $250 million in Q4, reaching a total of $2.79 billion.
- Arista is focused on expanding its customer base beyond cloud giants to enterprise and campus markets.
- The company sees potential opportunities arising from the HPE-Juniper transaction.
Financial Results
- Arista expects Meta to contribute over 10% of its revenue this year.
- The company reiterated its confidence in reaching a $750 million AI revenue target for back-end solutions alone.
- Arista’s growth guidance is set at approximately 17%, compared to the Street’s buy-side expectation of around 25%.
- Deferred revenue increased by $250 million in Q4, totaling $2.79 billion.
Operational Updates
- Arista completed the 7700R4 product, a distributed Ethernet switch co-designed with Meta.
- The company is engaged with 15 AI customers and is expanding its reach across enterprise, provider networks, and startups.
- Arista’s cloud customer base, excluding Meta, grew by approximately 30%.
Future Outlook
- Arista aims to increase its presence in the enterprise market and gain more customers within existing accounts.
- The company plans to upgrade to 800 gig on the front end of the network.
- Arista is exploring opportunities resulting from the HPE-Juniper transaction, with customers seeking discussions about potential outcomes.
Q&A Highlights
- Arista noted no significant changes in customer strategies regarding the use of OEM versus white box solutions.
- The company focuses on building an agnostic network compatible with various GPU and DPU technologies, contrasting with NVIDIA’s integrated system approach.
- Arista continues to seek new customers to balance the high capital expenditure associated with large cloud customers.
Readers are encouraged to refer to the full transcript for a more detailed understanding of Arista Networks’ strategic insights and market positioning.
Full transcript - Morgan Stanley Technology, Media & Telecom Conference:
Unidentified speaker: Given the controversy coming out of the quarter around the decline in Meta revenues in 2024, let’s maybe just start with the elephant in the room: how do you see Meta as a customer, and has that relationship changed over the last year?
John: That’s great. So the relationship with Meta still remains very strong. We completed our 7700R4 product. This is what we call our distributed Ethernet switch. That’s the third generation of co-design products that we’ve done with Meta.
We continue to be strongly engaged with them. I think it’s important to remember, we saw phenomenal growth at Meta in 2022 and 2023 as we went through the 400 gig cycle. And we did reiterate that we expect them to be an over 10% customer this year.
Unidentified speaker: Okay. Given the fact that your other cloud customers outside of Meta grew around 30%, what do you think the current dialogue around white box gaining share, or you guys potentially losing share, misses?
John: Sure. I mean, first, we’re pretty excited that we were able to grow those customers outside of our traditional base that quickly. Certainly, there is high interest in AI from those customers as well. In terms of white box, we really haven’t seen a change in customers’ perception of how they use white box and where. The customers that have traditionally been using white box in traditional cloud deployments are continuing to do so.
Some like Meta are mixed between white box and OEM and they continue to be so. And we really don’t see white box moving down market into smaller companies because of the investment that’s required both on supply chain qualification and software to make that happen. So pretty static in terms of white box.
Unidentified speaker: Okay. Got it. And so just from a software value standpoint, though, let’s just reiterate the differences of where you guys see your value add to customers, how that has not changed, and what they would need to do if they were going to use white box.
John: White box in general, or white box in the AI back end, or a combination?
Unidentified speaker: White box in general.
John: Yes, white box in general. So white box in general, you have to have an operating system. So you have to have Sonic or some other operating system to run. There have been companies in the past, Big Switch was one of them that we acquired, that was in the market of making operating systems to run on top of white box. That business model is very difficult.
If you keep in mind that networking spend in a data center might be 10%, it doesn’t run up to the volumes you have on servers where people are running Linux. So it’s really been focused on the top customers to have the wherewithal to invest in their operating systems as well as supply chain, make long term commitments to this supply chain as well as support.
John: And they have to self-support in terms of replacement, spares, and troubleshooting.
Unidentified speaker: Okay. Got it. There’s been some discussion as well about fabrics in the back end of the network that kind of diminish the value of Arista. Does this change the value proposition of Arista? Or is it again just competitive noise, as we’ve discussed with the past few questions?
John: There are some differences in front end and back end. So there’s advanced routing feature sets that are used on the front end networks to interconnect data centers. In the back end, you have more of a confined environment, but the focus around there is job completion time. If I’m running an AI training, will it complete on time? Will I have to restart?
So the core values of the operating system, resiliency, the ability to observe traffic, troubleshoot what’s going on, as well as having a secure environment for your software and operating system, remain critically important. And that’s the basis of the front end as well.
Unidentified speaker: Got it. You’ve made the point on the Q4 call, or Jayshree made the point, that the network opportunity is so large there’s room for multiple players. But in a market concerned with white box or InfiniBand or SpectrumX, where does Arista find its core value proposition? And how do you continue to punch above your weight in terms of share in the market?
John: Yes. I mean, if I look at our weight in data center and cloud, it’s actually pretty good around that as focus areas. Maybe not in terms of the overall players in terms of the market and some of the new entrants with NVIDIA. I think first being focused on the data center network and Ethernet has given us a tremendous advantage. We were involved in that movement from InfiniBand to Ethernet.
If I went back a year, there was a lot of questions about whether Ethernet would even work in these environments. I think we’ve been able to demonstrate along with our customers that Ethernet is not only very viable, but has a lot of advantages over InfiniBand. We’re the only provider that can play both in the back end and the front end networks in a substantial way. That gives us an advantage with our customers when we talk about operating expenses, the need to qualify multiple operating systems to have the ability to take products that work in the back end and the front end gives them a lot of optionality.
Unidentified speaker: Got it. I also realized we started this session a few minutes early, but I think we’re good; the room has filled in. So just how has confidence in the $750,000,000 AI revenue target changed since you guys introduced it at the Analyst Day in 2023?
For all of the reasons that we just kind of talked about, with the trends that you’re seeing with customers?
John: Yes. I think we reiterated our confidence in the $750,000,000 number. That is back end only, just to remind people. So we didn’t include front end networks and substantially that’s around just switching as opposed to optics. We do sell some optics and cabling in, but substantially that revenue is all switching.
At the time we made that announcement, again, big questions are whether Ethernet was viable, the reality of Ethernet. I think that that’s largely been settled. Certainly, there’s going to be a coexistence with InfiniBand, but I think we’ve seen more customers come into play now that have confidence moving forward with Ethernet. We’ve moved from a trial mode into production with a number of customers. So the momentum feels good and we feel like we’ve been tracking very well to that number.
Unidentified speaker: Okay. So you’ve noted that three of the cloud customers, from those five trials you had been talking about, with 100,000 GPU clusters will largely ramp this year, and another one in 2026, with one of those trials falling out. Just what are the factors influencing the timing of those ramps and the eventual revenue recognition we’ll see?
John: Sure. So, just for people who weren’t following this dialogue of the three customers: we originally started with five customers that we talked about, and that was in the context of Ethernet versus InfiniBand. As we started to see a pickup in Ethernet, we talked about five customers that we were engaged with, four of them moving in the direction of Ethernet, one using InfiniBand and planning to remain on InfiniBand. Since then, we’ve moved through trials into production with three customers. There was some confusion, I think people thinking the 100,000 was cumulative; that’s 100,000 for each of those three customers.
One of the customers, for reasons related to their own business model, isn’t engaging in their plans as they originally put them forth, nothing to do with technology or competition. And the fifth customer, who had prior been InfiniBand only, is moving forward with Ethernet, and we’re pretty excited about that as well.
Unidentified speaker: Okay, got it. Jayshree has noted or kind of you guys have noted a reciprocal one to one front end versus back end investment kind of opportunity when it comes to AI, just culminating in this kind of $1,500,000,000 AI target for 2025. Just as we think about deep seek or more efficient training, how does that influence kind of how you think about that front end, back end ratio?
John: That’s a great question. Yes, I think when we started down this path, we were totally focused on the back end opportunity because that represented new TAM. I think most of our customers followed the same path, very focused on how you get the back end of the GPUs connected, but they began to learn more about the implications on the front end network, maybe the need to do snapshots. So, if you’re running a training job and something bad happens, you want to be able to take snapshots of that data and move them off through the front end network. You also have the user traffic that comes back and forth into the front end network, not very significant when you’re just typing, but if you’re starting to render videos or other things with AI, that could become quite a substantial piece of this. Data storage is another aspect of it, as is interconnection between data centers to get data. All of that is adding to the need to really improve the front end networks.
It really depends on where the customer is coming from. Some had already invested in maybe a 400 gig upgrade and have some time. Others may have under invested in their front end networks and need to upgrade for the AI opportunity. So it’s kind of a mix in terms of where people are, but we do see that becoming more important. Now, with regards to DeepSeek, that has the potential to really start to drive some inferencing aspects that would further enhance the front end opportunity.
We have not seen our customers really change their plans in terms of their investment strategy or projects with relation to that DeepSeek announcement to date.
Unidentified speaker: Okay, got it. As customers face power or cooling constraints affecting their ability to deploy AI at massive scale, just how is this influencing architectural changes? And are there any implications that it has for kind of overall network design?
John: It’s pretty significant. So if I look at the last generation of products we came out with in the summer, the 800 gig portfolio, power and cooling just internally to build those products was very significant, an additional constraint. And that’s just going to get tougher with the next generation. Then on top of that, you have our customers that are deploying not only the network pieces, but the GPUs that come along with it. So this is creating a lot of experimentation with our customers in terms of their topology, how they’re thinking about the design and power distribution in their next generation data centers, and a lot of diversity in their thinking around those deployments.
It’s creating a situation where we’re working with them on very specific things relating to their data center details. And then we will see a mix of traditional data centers and these new high powered, high utilization GPU data centers moving forward, which will have some diversity.
Unidentified speaker: Okay. While the vast majority of the AI networking opportunity is going to remain concentrated in a handful of clouds, many of whom are your customers, how do you look at competing in the opportunities outside of the cloud titans, either with startups or sovereigns? And is that value proposition of Arista different there?
John: It’s very similar in terms of their product needs: stability, observability, the intrinsic value of the operating system, issues with the front end networks. We have engaged with many more customers since we announced those first five. I think we talked about 15 AI customers, and we’re adding to that list. They’re all across the map, some enterprise, provider networks, some of these startups; we’re engaging with them as well.
Unidentified speaker: Okay. I’m going to move away from some of the AI opportunities. I maybe wanted to take questions a little bit sooner than I normally would. So are there any questions from the audience? Tom, do you want to shout it out?
Tom: Can you talk about the competitive concern about white box vendors making another step higher in terms of share gains at your hyperscale customers? Can you address that? I think it’s a controversial topic for people in this room.
John: So the question, which I’ll repeat since you just got the mic, was about white box and competing with white box. I think there’s been some consolidation within the white box segment, and potentially some share gaining that’s gotten attention within that segment. Again, we don’t see customers varying from their high level strategy. If they were building with white box, they predominantly stayed on white box.
There are some opportunities within some of those customer sets for us where the diversity of use cases has increased, and there are some OEM opportunities for us as well. It’s been a predominantly static situation in terms of strategy from then until where we are today with AI. We don’t see a change in the strategies from our customers around their deployment of OEM versus white box.
Unidentified speaker: Got it. Any more?
Unidentified speaker: Yes. I have two questions.
Unidentified speaker: I think the first one has to do with Jayshree’s comment on the call saying, we appreciate all the enthusiasm, but we would want you to anchor to our guidance. The Street’s buy side is at like 25% growth; you guys are only guiding at like 17%. I don’t think I’ve ever remembered Jayshree making that comment. Why did you feel that was so necessary to say on the call, considering that you’ve been beating by 10% or more on the top line every year? That’s my first question.
John: Yes, I appreciate that. I think she was just trying to ground expectations around what we see in front of us. I think typically, we’re very bottoms up driven. Where do we see the opportunities in front of us? What are customers deploying?
It’s sort of a project based methodology, as opposed to looking at the numbers that somebody may have on the number of GPUs they’re putting forward and deriving what percentage we should take. It’s very much a bottoms up methodology, and I think she just wanted to ground people in those expectations.
Unidentified speaker: And the second question I have is, I think you guys said this last year: you’ve got the best product in the market. I guess my concern is when you compete with NVIDIA, especially in some of the NeoCloud accounts, NVIDIA doesn’t have to spend any money on marketing. You guys have to spend like 20% or something like that on marketing. They don’t have to spend anything, right? You want a GPU, they throw in the networking.
In fact, if you buy the whole solution from NVIDIA, they’ll give you a minus 2% rebate on the services. So I guess my question is: you have like the highest margin in the business, and NVIDIA is coming in and doesn’t have to spend 20% on sales and marketing. How do you combat that?
John: Sure. I’m going to answer that in two pieces. First, I want to take kind of the comment on sales and marketing. So we have two pieces of our business, right? We have maybe more than two pieces, but we have an enterprise focus go to market, which does require sales coverage, channel coverage.
And then we have the cloud business, where our sales and marketing expenses are extraordinarily low, right? So on that piece of it, I think there’s not much of a difference on the sales and marketing perspective. But you’re correct: what we saw with InfiniBand, and now as the momentum has moved to Ethernet, is that NVIDIA will focus on selling a system with the GPUs, with cables, with optics, and the switch. That’s their go to market strategy and their sales motion. And we’re focused on: let’s build a network that will last you, that works with NVIDIA GPUs today, maybe your own DPUs that you’re deploying, maybe AMD GPUs that might be forthcoming, and build out an agnostic network that you can service for both your front and back end. So they’re very different approaches to the market.
And I think just to reiterate, Jayshree was asked on the call if we view them as competition. They created a huge TAM and opportunity with AI, and we’re happy to participate.
Unidentified speaker: Great question right here.
Unidentified speaker: Within an average 100,000 GPU supercluster, what is the TAM that is dedicated to AI spines versus AI leaves versus top of rack?
John: That’s a good question. I don’t think we have a good breakdown of that. And one of the reasons is the topologies are very mixed. If you went back to basic good old cloud networking circa 2010, there was a sorting out of the tiers of the network that now we all kind of know and expect. With these new topologies, people are doing very different things.
One customer might not be the same as the other in terms of their definition of top of rack and scale out. So it’s a little bit of a mixed model at this point. I don’t have one for you.
Unidentified speaker: Good question back there.
Unidentified speaker: How should we think about customer concentration and mix going forward? We saw Meta come down as a share of revenue, and I think you explained that well, but with the diversifying customer base in AI, will the concentration on Meta and Microsoft come down over time?
John: So part of the concentration comes with the fact that there is just a smaller set of companies that have millions of servers, right? So if you’re going to service the broad networking market, and you want to participate in cloud and be successful with cloud, there is going to be customer concentration on the high end, whether it’s with a Microsoft or a Meta or one of the other large cloud titans. We do recognize that it’s important to work towards a broader customer base. That’s why our enterprise business is so important: to continue to gain new customers and to gain share within existing accounts. We have a large number of customers, large customers, where we still believe that we’re under penetrated.
So the focus on customer concentration is not to diminish the value of those customers, but to actually find many more new customers to compensate for the large CapEx you see with those large customers.
Unidentified speaker: One last question. Right here.
Unidentified speaker: Yes. Actually, the differences between those switch types were already mentioned. And I guess your technological edge would be in the spine most of all. What would you say about the competitive landscape in this specific type of product?
John: So if I understand your questions around the difference in the competitive nature depending on the tier of the network, is that?
Unidentified speaker: True. And especially focus on the spine.
John: On the spine. So if I think about the spine, that’s where operating system richness of features and functionality becomes very important. Customers do different types of things with routing and have different topologies. So, our traditional competitor there has been Cisco. You see some other people playing in that space.
And I think that’s a position where we’ve done very well. I hope that answers your question. I’m not sure.
Unidentified speaker: I think that’s helpful. So maybe let’s just turn back. Obviously, you’ve had the two M and Ms as your major cloud titan customers over the years. Over time, there’s been the prospect that another one of these cloud titans could eventually become a customer, or a bigger customer; they are a customer today. Just how do you think about that idea of additional opportunities with them?
John: Right. So it’s hard to believe now, but cloud was an underserved market that folks were going after with enterprise products or service provider products that really weren’t oriented towards their needs. As those customers have grown and there are more use cases within them, we do see opportunity. I don’t think it will take them rethinking their overall strategy, but we’re clearly engaged with them and ready to service needs, and we have applied our products to different situations and scenarios where their internal investment hasn’t been there, where they decided not to invest in those types of opportunities.
Unidentified speaker: Okay. You talked about kind of the second business is the enterprise business. Just how are you seeing enterprises invest in AI today? And just how do you view that kind of how they’re going to invest both on premise? Or will they primarily be in the cloud?
John: Still very early days to be definitive. I think we’re seeing certain segments, like financials and security, investing in AI, maybe some healthcare. And we believe they’ll follow a similar model to what they did with cloud: if the movement of data is difficult, or the data is extremely confidential, it tends to go on prem; and if something can be put in the cloud, and there are not as many concerns with data movement, it moves to the cloud. So we see a mixed model across what people are doing or thinking about today.
Even within the same customer, there could be a mix.
Unidentified speaker: Are there productized systems where you look to say, hey, here is our AI solution? Or is it a matter of: this is the portfolio that has been serving you for years, and it can also be used for AI? Are you targeting solutions?
John: A little bit of both. We’re doing solutions work, solution testing in our solutions lab, to make sure that AI can be deployed. We do tend to lead with the high performance products. The familiar operating system, the same operating model you’re used to, is really important. It’s the same operating system train that you’re on.
So if you’re qualifying the front end network, you’ve already qualified the back end network operating system, but the physical boxes tend to be newer, some of the updated products that we’ve come up with.
Unidentified speaker: Okay. Just how do you think about maybe turning back to kind of the cloud opportunity, just about the 800 gig refresh opportunity on the front end of the network? Clearly, over the past few years, you guys have been doing a large 400 gig upgrade or seen a big upgrade with kind of the M and M’s. Just how should we think about 800 gig eventually in the front end?
John: Yes, that’s a great question. So the products we announced last summer were 800 gig, but very oriented towards the back end network, the new 7800 at 800 gig. The line card we introduced was optimized for AI. You can imagine that there will be products like that with full routing capability that can be used in the spine. And as that pressure from the back end increases on the front end network in terms of performance, there will be an upgrade to 800 gig.
I think we’re kind of early in that cycle, but well prepared. And again, it’ll be things like snapshotting of data across my clusters, increased video traffic, some of the traditional drivers on the front end, that start to drive that transition.
Unidentified speaker: Okay. Another common question that we get is co packaged optics and just how co packaged optics changes the opportunity for Arista?
John: So I might get a little nerdy here within my answer on this. This gets really
Unidentified speaker: The nerdy topic.
John: That’s a nerdy topic, yes. So I think co packaged optics came into view as people started to think about next generation data centers and how do I reduce power going back to that question you had. And can I reduce the power by moving the optics that might be on a board closer to the chip, so I can reduce the transmission, the voltages, etcetera, and burn less power? Great idea. So as that was done, there were improvements in the ASICs that allowed you to drive the optics directly without having another DSP or another retiming chip.
Well, guess what: one of the downsides of co packaged optics that needs to be solved is that optics fail. Today, it’s quite convenient for a customer to take a failed optic out of an operating switch and put in a new optic without affecting the whole network or any of the other ports. When you co package and an optic fails, you might have to send the whole switch back to Arista to get a new one. There’s not been a convenient way for a customer to replace those co packaged optics.
And, by the way, a lot of the cost is a combination of that chip and all the optics. So that’s the put and take of this. The industry realized that the power savings benefits of co packaged optics could largely be achieved by eliminating the DSP or retiming chips inside the optics module and coming out with LDO, or linear drive optics as we call them. In the 800 gig cycle, I think that has captured a lot of the benefit of co packaged optics. Now, that’s not to say that as we go down the road, with higher performance speeds coming out, that little bit of trace from the chip to the optics won’t become problematic and co packaging won’t become important.
And also, maybe the inherent reliability of the optics themselves improves, or the ability to replace them comes into effect and negates some of the challenges with the co packaged approach. So we’re agnostic to that and very much following the technology trends and the development there.
Unidentified speaker: I mean, in Arista’s pedigree is this idea of always being merchant, working with merchant silicon and open ecosystems. Does that ever change as optics become more important? Would that ever be something you guys would invest in, or do you think that ecosystem stays open enough?
John: I think people want it, our customers want it, to stay open. I would never say never, but I think that is the approach the cloud customers would want us to take. And that ecosystem is important even today with the linear drive optics. We’ve demonstrated that it works.
It works really well, but people want to see a specification they can build to, to assure that multi vendor will work with linear drive and there will be no issues. And standardizing this sometimes is more difficult than actually building the product and making it work.
Unidentified speaker: Okay, perfect. Maybe returning to the enterprise business: the campus has been another big area where you’ve had targets over the past few years. Just how do you better capitalize on the success you’ve had in enterprise to scale that campus business?
John: Yes. I get this question a lot: what’s your go to market for campus? I think it’s important to frame it as a go to market for enterprise. Originally, that go to market was for the enterprise, the Fortune 2000 and Global 2000, with a data center product, and now data center and data center interconnect. We have added over the last five years a significant amount of new capability into that enterprise portfolio, including campus products.
And we initially started with a value proposition that, hey, you like EOS, you like it in the data center, it’s highly reliable and it doesn’t have some of the security implications that you’ve seen with other operating systems. Here’s a convenient way to add campus products into that. Subsequently, I think we’ve built a very unique value proposition around those campus products related to security, being able to segment your campus and make sure your traffic is secure or add security with network detection of anomalies into that capability. We’ve also broadened out the product set to hit more price points. In the last year and a half, we’ve added the capability not just to route between data centers, but to route out to the edge, and I think that’s been an area where we’ve improved the portfolio.
So we’ll go to market with an entire enterprise portfolio. And a customer at any given time may have one or two opportunities: this year I’m refreshing my access points, or I’m refreshing my campus POA, or I want a full redo of my MPLS backbone to EVPN. So that’s how we’ll go after it and address it.
Unidentified speaker: Okay. Clearly, a lot of disruption in the market right now with kind of HPE Juniper having their pending transaction. Just is that creating more opportunity for you guys on the campus side? Or how are you taking advantage of it on the routing side as well?
John: Yes. I think I’ll speak kind of to the enterprise piece and what’s happening. I think there were a number of customers that had Juniper and or HP as alternate sources or complementary sources. And those customers have come to us for discussions. If this combines, what will happen?
And of course, there’s some uncertainty with it now. But we’re the only one that can step into that arena and really play, from campus to world class data center products, and be a competitive portfolio for enterprise networks vis a vis the incumbent.
Unidentified speaker: Okay. And are there any special initiatives you guys have to go after those customers given the amount of disruption, identifying who those sources are, or has it been a lot more inbound?
John: I think it’s been a combination. I think our teams kind of knew where some of those opportunities landed and some have come to us.
Unidentified speaker: Okay. And then maybe the deferred revenue balance, which grew $250,000,000 sequentially in Q4 to $2,790,000,000, with significant increases in service and product deferrals. Just how should investors think about the timing of that revenue recognition, or how to read the messaging of the overall increase we’re seeing in that balance?
John: Sure. So the backdrop of that is a set of new products introduced last year at 800 gig for AI. At the same time, you have customers building out new types of networks for a whole new use case. It’s the combination of new products and new use cases that leads a customer to apply acceptance criteria to recognizing that revenue. And because of these AI deployments, those cycles have gotten longer than they traditionally were.
So that’s what’s generated the deferred revenue piece. As for when it comes out, I think at any given time, even if the number is constant, there are always things coming out and things coming in. So it’s difficult for us to guide on what the arc of that deferred revenue looks like.
Unidentified speaker: Okay. All right. I’m going to open up one last time for questions. Since we started five minutes early. We’re going to end five minutes early.
But any last questions? All right. Perfect. All right. Well, John, thanks so much for being here today.
John: Thanks for having us.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.