DigitalOcean touts pivot to AI inferencing at Goldman Sachs Communicopia + Technology Conference
On Thursday, 11 September 2025, DigitalOcean Holdings (NYSE:DOCN) showcased its strategic pivot towards AI inferencing at the Goldman Sachs Communicopia + Technology Conference 2025. The company emphasized the benefits of this shift, highlighting improved unit economics and alignment with its core capabilities. CEO Paddy Srinivasan detailed both opportunities and challenges, including why customers with paying end-users matter for revenue stability.
Key Takeaways
- DigitalOcean is shifting focus to AI inferencing for better unit economics.
- The Scalers Plus customer segment, representing 25% of the portfolio, is growing at 35%.
- The company completed a $625 million convertible notes offering to retire 2026 notes.
- DigitalOcean aims to re-establish itself as a starting point for AI companies.
- Partnerships, particularly within the open-source community, are crucial for expansion.
Financial Results
- The Scalers Plus segment now comprises 25% of DigitalOcean’s portfolio, growing at 35%.
- Historically, 20% of revenue is invested in CapEx, with 15% for growth and 5% for maintenance.
- A $625 million convertible notes offering was completed to retire 2026 notes.
- DigitalOcean reports an EBITDA margin exceeding 40%.
- Revenue predictability is improving, with roughly half of AI-native customer revenue now predictable thanks to inference workloads.
Operational Updates
- DigitalOcean released around 250 new features over the last four quarters, averaging a major product update almost every business day.
- Infrastructure improvements include expanded Droplet options and enhanced storage and networking capabilities.
- The Gradient AI platform now offers serverless inferencing and agentic building blocks.
- Cloudways Copilot, an AI agent for website management, has achieved over 95% accuracy in issue prediction.
Future Outlook
- Continued focus on AI inferencing is expected to drive growth.
- DigitalOcean is investing in durable growth through companies with real customer traction.
- The company plans to expand partnerships with the open-source community to drive customer acquisition.
- Ongoing investment in compute, storage, networking, and database offerings is planned.
Q&A Highlights
- The strategic focus on inferencing is driven by better unit economics and alignment with DigitalOcean’s core strengths.
- Product-led growth and partnerships are key to expanding market reach.
- The demand environment remains resilient, with country-level microeconomic factors mattering more than global macro trends.
- DigitalOcean is competing with NeoClouds in the AI inferencing space, with multi-cloud inferencing becoming more common.
- The financial strategy includes investing in durable growth and AI-native companies with enterprise and consumer use cases.
For more details, refer to the full transcript below.
Full transcript - Goldman Sachs Communicopia + Technology Conference 2025:
Unidentified speaker: All right. Good morning. Thank you so much for joining us at the DigitalOcean session at our conference. Especially thanks to Paddy. Thank you for joining us.
Paddy, CEO, DigitalOcean: Thank you for having us.
Unidentified speaker: Could we get the door closed, please? Can someone close it? Thank you. Paddy, I wanted to ask you about a topic that’s been very much in the news this week. There is a fair amount of industry discussion on whether investing in training specifically is actually a good—all right, let’s try that again. Paddy, I think that the news flow this week has been very much focused on the unit economics of training and inference. The beauty of the DigitalOcean business is you do a little bit of both, and increasingly inference. How do you think about it as the CEO of DigitalOcean? How do you think about whether investing in training and inference is a durable, healthy business for the long term? Maybe take training and inference separately.
Paddy, CEO, DigitalOcean: Yeah. First of all, thank you for having us here. It’s wonderful to be here as always. It’s a great lead-off question in the sense that this has been a dominant theme for us over the last several quarters, but also this week where I’ve spent a lot of time with AI-native companies that are ramping up their footprint on us. I would say we made a bet a couple of quarters ago that we are going to be focused on inferencing for a number of strategic reasons. Number one, that’s very close to the DNA that we have had over the years. A couple of really interesting data points from a unit economics point of view. For training, it is all about GPU dollars per hour.
For inferencing, there are a lot of interesting patterns that are emerging where if you think about inferencing, it’s all about the throughput that you can get, like the flops measured, right? Within reason, within a family of GPUs, increasingly, our customers really don’t care about whether we are servicing it with, I’ll take an older example of H100s versus H200s. They’re like, OK, as long as I can get this kind of throughput, I really don’t care how you service us. It’s all about the dollar per flops versus GPU dollars per hour, which is a pretty big shift. Interestingly, there are a couple of other types of inference use cases that are emerging where I spent time with two different startups this week, both of whom have freemium business models.
For them, the free tier, they’re OK, or they want us to serve the free tier customers using an open-source model that we offer through our serverless inferencing fleet. For their more "premium" customers, they want to have a closed-source model that they have fine-tuned and they’re hosting on raw GPUs on our infrastructure. They want us to do the load balancing and the routing dynamically based on whose request is coming in. If you look at the price performance on these two types of requests that we are getting, very different. One is about 1/4 or 1/5 the cost profile of the other. From an inferencing point of view, it’s a totally different ballgame in terms of how our customers perceive the unit economics.
That’s why it is really important for them to not only have a provider that is just racking and stacking GPUs, but have a full stack agentic cloud that can do all of these things dynamically.
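To make the unit-economics point concrete, here is a small illustrative sketch, not DigitalOcean’s pricing or code: it converts a GPU-hour rate and a sustained throughput into an effective cost per million tokens, then routes free-tier traffic to a shared open-source model while paying customers hit a dedicated fine-tuned fleet. Every number, plan name, and model label is a hypothetical placeholder, chosen only so the output roughly reproduces the 4-5x cost gap described above.
```python
# Illustrative only: convert a GPU-hour price and a sustained throughput into an
# effective cost per million tokens, then route requests by customer tier.
# Every number, model label, and plan name below is a hypothetical placeholder.

def cost_per_million_tokens(gpu_dollars_per_hour: float, tokens_per_second: float) -> float:
    """Effective $/1M tokens for a GPU serving at a given sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_dollars_per_hour / tokens_per_hour * 1_000_000

# Two hypothetical serving paths for a freemium product:
free_tier = cost_per_million_tokens(gpu_dollars_per_hour=2.0, tokens_per_second=900)  # shared open-source model
premium = cost_per_million_tokens(gpu_dollars_per_hour=6.0, tokens_per_second=550)    # dedicated fine-tuned model
print(f"free tier: ${free_tier:.2f} per 1M tokens, premium: ${premium:.2f} per 1M tokens")

def route(request: dict) -> str:
    """Free users hit the shared serverless model; paying users hit the dedicated fleet."""
    return "serverless-open-source" if request["plan"] == "free" else "dedicated-fine-tuned"

print(route({"plan": "free", "prompt": "hello"}))
```
The takeaway of the sketch is the shift in the unit being priced: the customer is effectively buying tokens of throughput, not hours of a specific GPU.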
Unidentified speaker: To level-set for us on the mix that you have today: are there exceptions to the rule where you will accept training workloads as part of the migration of the customer journey?
Paddy, CEO, DigitalOcean: It’s getting a smaller and smaller part of our fleet and part of our resource allocation philosophy. That’s one of the big things we are looking at for next year’s capacity. How should we allocate it across our customer base? It’s going to be predominantly inference-based workloads for a number of reasons. One, for me personally as CEO, I look at who’s actually paying the bill, not the startup, but who’s eventually paying the bill. Is it a venture capitalist or is it actual real customers? I get super excited, obviously, by whether it is a consumer or an enterprise customer that is paying the bill because then there is more durability of revenue both for the company that we are doing business with and for us eventually.
Unidentified speaker: This is a great observation. Out of the companies that you have on DigitalOcean today, I know your visibility is not perfect. What % of them have customers paying the bill?
Paddy, CEO, DigitalOcean: We have companies that are doing, for example, generative media. They are selling to B2B customers who may have consumers at the other end of this spectrum. These B2B customers are looking at generative media as a way to increase their conversions or increase their engagement and the depth of product usage for them. These are great use cases for us because we know that this use case, if the customer is or if the startup is able to prove the workload, it is only going to go up in usage.
Unidentified speaker: The other interesting comment you made there, inference is closer to DigitalOcean’s DNA. I think you expanded on that a little bit already, but what do you mean by that?
Paddy, CEO, DigitalOcean: Yeah. There are multiple reasons why we feel inferencing is a place where we have a big right to win for a number of reasons. I’ll start with the most obvious one, which is inferencing is a lot more than just GPUs. Yes, GPUs are a big part of inferencing. When you talk about inferencing, you need the raw horsepower to leverage an LLM, whether it is a closed-source model or an open-source model or a serverless inferencing of either of these two models. More important than that is in an inferencing mode, you need to pump in some custom data. You need to pre-process the data and post-process the data. You need a way by which you can build some of the other higher order services around it. You need to have guardrails.
You need to evaluate both in real time and also offline which model is the best suited to serve the needs of your customers. I gave the example of a free customer and a freemium customer. Different types of use cases might require different parts of the same LLM even. We are now starting to see from a same LLM provider, different models are great at doing different things. How do you do that in real time? For many companies which are not super sophisticated AI natives, they also want the ability to start building agentic workflows from a template. They also want the ability to have multi-agent orchestration. You need a way to have sophisticated routing engines and traceability and observability. All of the things that we have done on the traditional cloud are very important when you’re running inferencing at scale.
For all those reasons, a lot of our customers have started, they come for the inferencing needs, but they stay because we are a full stack cloud. They start leveraging the other stuff, they don’t have to go to multiple cloud providers. The fact that in our new data center, these two stacks live side by side in an integrated fashion is a big, big deal for them.
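A minimal sketch of the "more than just GPUs" argument, assuming a generic pipeline rather than DigitalOcean’s actual implementation: each request passes through pre-processing, a guardrail check, model selection, and post-processing before a response goes back. Function names, rules, and model labels are illustrative placeholders.
```python
# Generic inference pipeline sketch (not DigitalOcean's code): pre-process, apply a
# guardrail, pick a model for the task, call it, and post-process the result.

BLOCKED_TERMS = {"credit_card_number"}  # stand-in for a real guardrail policy

def preprocess(raw: str) -> str:
    return raw.strip()

def guardrail_ok(prompt: str) -> bool:
    return not any(term in prompt for term in BLOCKED_TERMS)

def select_model(task: str) -> str:
    # Even within one provider, different models are better at different things.
    return {"summarize": "small-fast-model", "reason": "large-capable-model"}.get(task, "default-model")

def postprocess(text: str) -> str:
    return text.rstrip()

def run_inference(raw_prompt: str, task: str) -> str:
    prompt = preprocess(raw_prompt)
    if not guardrail_ok(prompt):
        return "request rejected by guardrail"
    model = select_model(task)
    completion = f"[{model}] response to: {prompt}"  # placeholder for the actual model call
    return postprocess(completion)

print(run_inference("  Summarize this document for me.  ", task="summarize"))
```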
Unidentified speaker: Turning to the broader business, you’ve had a lot of success growing your scalers plus your largest spend cohort. It’s now 25% of the portfolio and growing 35%. What are the specific product features and enhancements you’ve invested behind to accelerate this momentum? Where do you see the gaps that you need to fill to continue?
Paddy, CEO, DigitalOcean: I’ll start with a boring answer. There’s not one or two features. It’s a collection literally of about 250 features we have released over the last four quarters or so. If you look at the number of business days in any given quarter, we are releasing a major product update almost every business day. We can categorize some of these things to say, for those of you who are new to the DigitalOcean story, when I came on board about 20 months ago, one of the biggest themes that was highlighted was the fact that customers grow to a certain size of footprint on the DigitalOcean platform and they’re forced to graduate because we didn’t have certain types of functionalities. You can group those functionalities into core compute.
We had a couple of types of Droplets, but we didn’t have a lot of different flavors of them in terms of some use cases need memory-optimized Droplets. Some use cases need compute-heavy Droplets or storage-heavy Droplets. We have fixed a lot of those things. We have a variety of different Droplet options, even to the extent now we have an inference-optimized Droplet powered by GPUs. That’s on the fundamental level. Our storage was also lacking a lot of high-throughput storage, input-output, different types of network-attached storage, and things like that. That was another big hole. Our networking stack was fairly basic. That was a big area of focus for us over the last six months. We have added several features, including Virtual Private Cloud and also direct connect between our data centers and the hyperscaler data centers. That’s been a big hit because we don’t charge extra for that.
That has been a big hit with our large customers, especially scalers plus, because now we can go make a pitch to migrate a part of their existing workload. It doesn’t have to be all or nothing. Our big customers absolutely love it because they love many parts of the DigitalOcean platform, but not all parts of it. They want to have a multi-cloud strategy even within a given workload. Now they can tastefully pair us with an existing workload running on GCP or AWS. That’s another one. The final thing is it’s an evergreen area where we are investing a lot of bandwidth, which is our database offering. I think that’s going to continue over the next year or so. In a couple of weeks, we have our product conference in London. We’re going to make a series of big announcements.
It’ll be in compute, storage, networking, database, and everything in between.
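For readers newer to the platform, the "different flavors of Droplets" choice shows up in practice as a size slug passed to DigitalOcean’s public v2 API when a Droplet is created. The sketch below uses that real endpoint, but the size and image slugs are placeholders; the current catalogs are available from the API’s sizes and images listings.
```python
# Sketch against DigitalOcean's public v2 API. The size and image slugs are
# placeholders; list real ones via the API's /v2/sizes and /v2/images endpoints.
import os
import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}"}

def create_droplet(name: str, size_slug: str, image_slug: str, region: str = "nyc3") -> dict:
    body = {"name": name, "region": region, "size": size_slug, "image": image_slug}
    resp = requests.post(f"{API}/droplets", headers=HEADERS, json=body, timeout=30)
    resp.raise_for_status()
    return resp.json()["droplet"]

# Pick the flavor that matches the workload profile (slugs below are illustrative):
# create_droplet("cache-node", size_slug="memory-optimized-size-slug", image_slug="ubuntu-24-04-x64")
# create_droplet("batch-worker", size_slug="cpu-optimized-size-slug", image_slug="ubuntu-24-04-x64")
```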
Unidentified speaker: You’ve traditionally relied on a PLG motion. As you’re expanding to these larger customers, how are you leveraging partnerships like with Hugging Face or channel partnerships to grab a larger portion of the market there?
Paddy, CEO, DigitalOcean: Yeah, it’s a good question. Our product-led growth has been a major driving force from the founding days to now. Over the last couple of years, it was showing signs of fatigue, if you will. Last quarter, we talked about how we had one of the best quarters ever in terms of our month 1 to month 12 cohort. The reason why I get super excited by that is today’s M1 to M12 is tomorrow’s NDR. I obsessively look at the quality of customers coming in in terms of their ARPU and their velocity of progression through the M1 to M12 and how quickly they’re attaching other DigitalOcean products. On top of that, we have never had a proper sales-led growth motion. When we had good sales, we didn’t have great products. When we had good products, we didn’t have good sales. Now we have a good one-two punch.
We are now packing a good amount of wood behind the sales-led growth arrow. That will be a big theme for us next year. On top of it, we are starting to open new front doors through which customers can come in. One obvious one is our AI front door has been great in getting more customers attached to our traditional DigitalOcean stuff. You also mentioned a couple of other partnerships. One partnership that hasn’t gotten a lot of publicity is because we haven’t really launched it yet. We talked about it at the conference, a company called Laravel, which is the most popular PHP framework in the world. Their founder talked about how they’re launching their VPS offering exclusively on DigitalOcean. We have, I don’t know how many thousand people in the waitlist for that. We are going to be releasing that in the next couple of weeks.
We expect that to be a massive front door. Our partnership is not restricted to one or two companies, but we are looking at the open source community at large to drive a lot of traction to us. That will be an evergreen motion in terms of our partnership, both on the core cloud as well as on the AI side.
Unidentified speaker: On the AI side, can you talk to us a bit about the breakdown of the platform currently? What % of your customers are leveraging the infrastructure versus the platform and eventually the agentic? Where do you see this going over the medium term as well?
Paddy, CEO, DigitalOcean: Yeah, absolutely. I’ll maybe just do a continuation of the previous answer, which is on the AI side. The AMD developer cloud is another big front door, which is powered by DigitalOcean. We continue to expand the footprint of how companies and developers can enter the DigitalOcean family. Specifically talking about the AI stack, I think we have talked about, and we largely borrowed the inspiration from the Goldman Sachs framework of IPA, Infrastructure Platform and Agents, or applications. From our infrastructure point of view, both NVIDIA and AMD are the big GPU offerings. On top of these GPU offerings, we have a couple of layers of abstraction. Of course, we offer bare metal compute. Increasingly, a large percentage of our customers have started using our Droplet architecture.
Our Droplets are very sophisticated in what they provide in terms of taking away all the brain-damaging work of putting the right frameworks and making sure that they’re working and stuff like that, but also more sophisticated orchestration and lifecycle management of these instances, which if you worked with GPUs, it’s not very sophisticated. There’s a lot of breakage, and you have to do a lot of babysitting in terms of the lifecycle management. In addition to all of this, in the infrastructure layer, we’re also building some inference optimization logic. Both ourselves as well as working with some partners, we are building inference optimization, including the GPU inference Droplet that we created. That is an ongoing R&D effort. We are building a lot of IP on that.
The next layer is our Gradient AI platform, which starts with serverless inferencing of both closed-source and open-source models, plus all of the other building blocks that I rattled off, all the way from a model playground to TCO calculations on different LLM throughputs to agentic building blocks for multi-agent workflows, agent evaluation, agent traceability, and so forth. All of these building blocks are what we call the Gradient AI platform. Typically, the users of these two layers are very different. AI-native startups typically want access to GPUs. SaaS applications and traditional software companies want access to serverless endpoints or the agentic framework directly because they’re not trying to take GPUs and start building from scratch. They are introducing AI as a feature into their platform. To finish my answer, most of the revenue today comes from the AI infrastructure layer.
Most of the mindshare adoption and thought leadership is in the middle layer. Will that invert sometime in the future? Yes. When? I don’t know. We already have 6,000 unique customers using our platform, more than 15,000 agents deployed at this point. A lot of them are in proof of concept mode. At some point over the next few quarters, that will invert. We are really looking forward to that.
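The serverless-endpoint consumption pattern described here looks roughly like the sketch below: the caller never touches a GPU, it just sends tokens in and gets tokens out. The base URL and model identifier are placeholders and assumptions, not confirmed Gradient AI values; the only assumption is an OpenAI-compatible chat-completions API shape, which many serverless inference services expose. The real endpoint and model catalog live in DigitalOcean’s documentation.
```python
# The endpoint URL and model name below are placeholders, not confirmed Gradient AI
# values. The only assumption is an OpenAI-compatible chat-completions API shape.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key=os.environ["INFERENCE_API_KEY"],
    base_url="https://serverless-inference.example/v1",  # placeholder endpoint
)

resp = client.chat.completions.create(
    model="an-open-source-model",  # placeholder model identifier
    messages=[{"role": "user", "content": "Classify this support ticket: 'My site is down.'"}],
)
print(resp.choices[0].message.content)
```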
Unidentified speaker: You announced the general availability of your Cloudways copilot. Any early feedback from customers? What are you hearing on adoption?
Paddy, CEO, DigitalOcean: Yeah, it’s been a big hit. It’s just amazing. See, you have to realize our Cloudways customers, some of them are technical. Generally speaking, they are not very technical. They’re hosting websites or they’re digital agencies and things like that. They typically have shared IT resources. They’re not babysitting websites day in and day out. For them, having an agent that is doing the job of a human is a welcome addition to their fleet because they’re not there looking at the varnish cache of their WordPress deployment all day long. The more automation we can provide in terms of just the observability and monitoring of the health of their website is super welcome. The next step of actually taking remediation on stuff before it actually goes wrong is an absolute winner for them.
We are getting more than 95% accuracy in terms of our ability to predict that something is about to happen. In addition to the Cloudways Copilot, we are using the same technology internally because, I mean, obviously, we have a massive cloud footprint. With any infrastructure at this scale, things go wrong all the time. It has reduced our mean time to respond and mean time to remediate by 30% to 40% in most cases. We have still only opened up three or four use cases internally. The productivity gains are just staggering.
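Cloudways Copilot’s internals are not public, so the following is only a generic illustration of "predict the issue before it happens," not DigitalOcean’s method: extrapolate a trending metric and raise an alert while there is still time to remediate. The thresholds and readings are hypothetical.
```python
# Not Cloudways Copilot's actual logic -- a generic illustration of predicting an issue
# before it happens: extrapolate hourly disk-usage readings and warn if the disk is
# projected to cross a threshold soon enough that remediation should start now.

def hours_until_threshold(samples: list[float], threshold: float = 95.0) -> float | None:
    """samples: hourly disk-usage percentages, oldest first. Returns ETA in hours, or None."""
    if len(samples) < 2:
        return None
    if samples[-1] >= threshold:
        return 0.0
    growth_per_hour = (samples[-1] - samples[0]) / (len(samples) - 1)
    if growth_per_hour <= 0:
        return None
    return (threshold - samples[-1]) / growth_per_hour

usage = [62.0, 64.5, 67.1, 69.8, 72.4]  # hypothetical readings
eta = hours_until_threshold(usage)
if eta is not None and eta < 24:
    print(f"disk projected to hit 95% in ~{eta:.0f} hours -- open a remediation task now")
```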
Unidentified speaker: You made a really interesting comment there. AI natives want GPU access. Traditional SaaS companies want serverless and more edge compute, I think you said. The question for you is there is this debate in the market today on whether AI native companies can disrupt traditional SaaS companies. One of the parts of the argument is, the tech stack is fundamentally different for an AI native company than the SaaS company. Would you agree with that? Is there malleability here?
Paddy, CEO, DigitalOcean: I think I’m more on the side of over time, the AI natives are going to disrupt the traditional software companies. There is a parallel stack emerging. Even in the AI native companies, and I should probably qualify this a little bit more, AI native companies that are more infrastructure-oriented need raw access to GPUs. I was talking to an AI native company that is building contact center software. They don’t want access to raw GPUs. They want serverless endpoints because they are like, yeah, but they want cheaper serverless endpoints. They don’t need access to raw GPUs. What they need are high-quality tokens in, tokens out because they’re doing voice-to-text and things like that. They want us to do the heavy lifting of, hey, I’ll give you the model, or I’ll point you to the model. You host it.
You manage the lifecycle and just give me API access to it. I feel there is a parallel observability, for example. Age-old problem. We’ve been doing it since the mainframe days. The way you do observability for a pure end-to-end agentic stack is very different. What you observe and what you take remediation on is very different from what you observe and take action on for a traditional cloud stack. I think there is a parallel stack emerging. It is also nuanced in the sense that the more sophisticated AI natives that are building raw infrastructure or doing media manipulation and those kinds of things need access to raw GPUs. AI natives that are more in the business realm or building business workflow software are tilting more towards getting access to endpoints in a serverless manner.
Unidentified speaker: Yeah, it’s fascinating. You talked about this at the beginning, how more of the mix is now customers that are actually paying for the business model as opposed to just users. This is a question about the quality of revenue at some of the AI-native startups. To what extent are you still seeing a lot of stopping and starting, a lot of experimentation, such that you can’t measure net retention the way you typically would? Are you starting to see the quality of that revenue improve?
Paddy, CEO, DigitalOcean: I would say maybe about half of our revenue is getting to be predictable because of these inference workloads.
Unidentified speaker: Of the AI-native companies?
Paddy, CEO, DigitalOcean: Of the AI, yeah. The AI-native companies, we are trying to get slightly longer-term commitments from these customers because we have limited capacity in terms of our inference fleets. Part of my business development activity with these companies is, hey, give us more visibility. Can you give us six months? Can you give us 12 months or 18 months? The more mature these AI-native companies are with their inference workloads, they are willing to now give us visibility into their 12-month run rate because they know what the price performance or number of tokens required are to serve their customer. They have a prediction in terms of how that is going to look in terms of their end-user adoption and the number of tokens required to service and are able to give us some visibility in terms of what their needs are.
I think it’s still early days, but we are starting to see that for sure.
Unidentified speaker: I know we’ve debated this with Matt every quarter, the question on the demand environment, the visibility that you have as a company, because you do tend to serve more of an SMB developer-led customer base. I would love to hear your thoughts. How do you feel about the health of the demand environment for both that, you already addressed the AI-native cohort, what about the SaaS?
Paddy, CEO, DigitalOcean: I think they’ve been more resilient than I initially thought during the very turbulent April time frame. It has been fairly resilient. I would also say that it’s less about some of the macroeconomics going on globally. There’s a lot of microeconomics from a country-by-country perspective that we see. In terms of the demand environment, we are not seeing anything unusual at this point.
Unidentified speaker: How about from a competition standpoint? I really appreciate that at the beginning of this conversation, you were talking about the inference workloads being close to your DNA as a company and the unique value proposition that you have. As investors, we spend a lot of time hearing from new cloud providers that are addressing inference workloads. Maybe drive it home for us. Are you seeing a change in competition for that AI-native cohort versus the cloud?
Paddy, CEO, DigitalOcean: I think we are, I don’t know if it has really changed that much in the last six months. It’s the same names that we keep seeing. It hasn’t really changed. These are the same NeoClouds that you’re probably hearing from as well. I think there is definitely a more nuanced appreciation from our customers in terms of some of the other building blocks that they need. We are seeing a lot of companies approach us. The concept of multi-cloud inferencing is also picking up. We have many customers for whom we are not the only cloud. They may start their journey from a hyperscaler. For whatever reason, they don’t have capacity or they don’t have certain things available, and they come to different NeoClouds. They come to us.
I think the concept of going from a single cloud to multi-cloud, probably in classic cloud, took 10 years for multi-cloud to really come to fruition. In AI, it feels like it’s already there for inferencing.
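A minimal sketch of the multi-cloud inferencing pattern described here, assuming simple HTTP endpoints rather than any particular provider’s API: try the primary cloud first and fall back to the next one when capacity or availability runs out. Provider names and URLs are placeholders.
```python
# Multi-cloud inference fallback sketch. Provider names and URLs are placeholders;
# the pattern: try the primary endpoint, move to the next on capacity/availability errors.
import requests

PROVIDERS = [
    {"name": "primary-cloud", "url": "https://primary.example/v1/infer"},
    {"name": "secondary-cloud", "url": "https://secondary.example/v1/infer"},
]

def infer(prompt: str) -> str:
    last_error = None
    for provider in PROVIDERS:
        try:
            resp = requests.post(provider["url"], json={"prompt": prompt}, timeout=10)
            if resp.status_code == 429 or resp.status_code >= 500:
                last_error = f"{provider['name']} unavailable ({resp.status_code})"
                continue  # out of capacity or erroring: try the next cloud
            resp.raise_for_status()
            return resp.json()["completion"]
        except requests.RequestException as exc:
            last_error = f"{provider['name']} failed: {exc}"
    raise RuntimeError(f"all providers exhausted: {last_error}")

# Example: print(infer("Summarize this call transcript."))
```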
Unidentified speaker: Turning to the balance sheet, you’ve historically invested around 20% of revenue in CapEx, with 15% of that towards growth and 5% towards maintenance. How are you thinking in the medium term as you look to your investor day targets to re-accelerate revenue? Is there any near-term influx that’s needed on the growth side of things to support this investment? How are you thinking about additional funding pools to do so?
Paddy, CEO, DigitalOcean: Yeah, that’s a really pertinent question given what we’re seeing. On the investor day, we said, you know, this has been our historical run rate. This is the split that we are used to. I think that’s largely still true. We are getting more and more confidence that we will not be afraid to invest behind durable growth backed by companies that are seeing real customer traction. We also talked about the fact that just like other companies in the market, even some that came out earlier this week, there are multiple tools that we will leverage as part of our tool belt to support our growth aspirations. The key thing here is to say it needs to support our growth aspirations.
If there is a way to accelerate our growth or get to our growth aspirations faster or any combination of those two things, we will absolutely invest behind it. We are starting to look at some of those things as we are picking up momentum and traction with these AI-native companies. Not to sound like a broken record, but the more true inference workloads that we can see with durability attached to them, the more conviction we will have to invest behind those workloads and those companies. Part of our mandate is to go allocate our resources from a compute perspective behind these companies that have real enterprise and consumer use cases behind them. Just by the nature of inference, it just takes out a huge piece of uncertainty behind, like, are they going to be viable in six months?
If they’re doing inference at scale with thousands of GPUs today, somebody is paying money in exchange for value. That is a big validation for us. That gives us more conviction to invest behind this.
Unidentified speaker: What does the pipeline look like for the $20 million plus multi-year type deals like the one that you announced recently?
Paddy, CEO, DigitalOcean: The pipeline looks healthy. The pipeline looks healthy. Part of it is with companies that we seeded and are finding traction. We are getting very active in the startup community with the concept of, hey, this was something that DigitalOcean was great at. You won’t believe the number of people who come up to me regularly when I’m walking around with a DigitalOcean t-shirt in an airport saying, hey, I learned how to code on Ruby on Rails on DigitalOcean. We are trying to get back to that DNA with AI companies. Even last night here, we sponsored an event called Founders You Should Know, which is a very small curated set of founders. These are really successful serial entrepreneurs. We are getting back to that route of seeding DigitalOcean as a place to start their AI journey, not just their cloud journey. I feel really good about that.
Unidentified speaker: In August, you completed an offering of the $625 million convertible notes to in part retire the 2026 notes. Can you just bridge the gap between the remaining 20% to retire and how this is going to impact DigitalOcean’s funding structure in the future?
Paddy, CEO, DigitalOcean: Yeah, I think we are in a really good place. I was just joking that I’m so glad I don’t have to take more calls on converts. So glad to have that done. I think we have a little bit of a stub left over. I mean, we are in excess of 40% EBITDA margin. We throw off a lot of cash, and it gives us a lot of optionality to do all of the things that we just talked about. We have a significant amount of runway between now and the end of next year to take care of this. We have a very high degree of conviction that this is behind us, and we have multiple degrees of freedom to pursue.
Unidentified speaker: I wanted to ask you a little bit more about product-led growth and LLMs. We had Canva, we had Vercel, we had HubSpot conversations recently, where they all said that you’ve sort of evolved from search engine optimization to AI engine optimization. I’m curious what you’re seeing in terms of lead generation from LLM references and how you think about what product-led growth looks like under that paradigm.
Paddy, CEO, DigitalOcean: Yeah, that’s a great question. We obviously spend a lot of cycles tracking this. The movement from SEO to GEO is real, and we are seeing it every day with tweaks to the Google algorithm and things like that. It makes a huge difference for us. I’ll start by saying our M1 to M12 has never looked healthier. It’s an amazing engine for us. Our SEM spend is fairly minuscule, really small, like single-digit millions is what we spend. It’s not a big driver of our PLG motion. Our PLG motion is driven by community, our open source involvement, and organic search. Branded search is a very small part of our overall strategy. We’re also starting to get a disproportionate amount of our signups from LLMs, but it’s still early stages.
We are getting a disproportionate amount of our signups from LLMs, but their conversion rate and their ARPU is something that we are monitoring and tracking. Are they coming here to do something serious, or are they kids and students that are here? Yeah, sorry?
Unidentified speaker: Experimenting.
Paddy, CEO, DigitalOcean: Experimenting, yeah. We are looking at all of that. It’s still very early days. Even for our product-led growth motion, we have multiple front doors. The open source community is a great example where we get customers coming in from different open source frameworks into our PLG motion, and then they become super entrenched customers of ours. Unlike some other companies, where, even in my previous job, we used to spend tens of millions of dollars on SEM to acquire customers through Google, that’s not the case with us here at DigitalOcean. I feel it is an important part, but it’s not the most important part of our PLG motion. We have multiple feeding points into our PLG motion. It’s a really fascinating time: we are starting to see a significant impact of the Google search algorithm on how we bring customers into the funnel.
Luckily, we have multiple bites of the apple. We don’t rely on search engine marketing to drive the top of our funnel.
Unidentified speaker: Really fantastic comments. Thank you for your time.
Paddy, CEO, DigitalOcean: Thank you very much. Appreciate it.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.