On Wednesday, 04 June 2025, Dynatrace (NYSE:DT) presented at the 45th Annual William Blair Growth Stock Conference, showcasing its strategic position in the observability market. CEO Rick McConnell and CFO Jim Benson highlighted the company’s strengths in AI-powered solutions and its measured approach to market challenges, such as cautious customer spending and extended deal cycles.
Key Takeaways
- Dynatrace’s market opportunity in observability and application security is valued at $65 billion.
- The company reported ARR of $1.7 billion, with subscription revenue growth of 20% last quarter.
- Dynatrace’s Platform Subscription accounts for 60% of ARR, with higher consumption and retention rates.
- The company’s focus on large, complex environments provides significant cost-saving opportunities for customers.
- Future plans include developing fully autonomous systems using generative AI.
Financial Results
- ARR reached $1.7 billion, reflecting strong market demand.
- Subscription revenue grew by 20% in the last quarter, indicating robust customer engagement.
- Operating margin stood at 29%, while pre-tax free cash flow margin was 32%.
- Annualized free cash flow is expected to be approximately $500 million or more.
Operational Updates
- Dynatrace focuses on large, complex environments, offering tool consolidation and cost savings.
- The Dynatrace Platform Subscription (DPS) model now represents 60% of ARR, enhancing platform usage and net revenue retention.
- New product areas, such as logs, are gaining traction, with increased partner-influenced deals.
Future Outlook
- Dynatrace aims to develop autonomous systems capable of auto-remediation using Agentic AI.
- The company anticipates longer deal cycles due to the dynamic macroeconomic environment.
- A more comprehensive guidance update is expected after the first half of the year.
Q&A Highlights
- Dynatrace competes in a large market by focusing on tool consolidation and simplifying complex environments.
- The integration of generative AI increases the need for observability and enhances the Dynatrace platform.
- Despite a dynamic macro environment, the observability market remains resilient, with cautious yet ongoing customer spending.
The full transcript of the conference call provides further insights into Dynatrace’s strategic plans and market positioning.
Full transcript - 45th Annual William Blair Growth Stock Conference:
Jake Roberge, Research Analyst, William Blair: Good to kick off? All right. Well, thanks, everyone, for joining here in person or listening over the webcast. Before we begin, my name is Jake Roberge. I am the research analyst here at William Blair that covers Dynatrace.
And for a full list of our research disclosures, please visit our website at williamblair.com. Well, with that, really excited to have Rick McConnell, the chief executive officer of Dynatrace, Jim Benson, the chief financial officer of Dynatrace here with us today. So thanks for coming.
Jim Benson, Chief Financial Officer, Dynatrace: Appreciate it. Thanks for having us.
Jake Roberge, Research Analyst, William Blair: And before we jump into the fireside chat, Rick is actually gonna start off with a little bit of a presentation just to level set the room for those that might be newer to the story of what Dynatrace does, the market they’re attacking, and then we’ll jump into the fireside chat. So, Rick, I’ll turn it over to you.
Rick McConnell, Chief Executive Officer, Dynatrace: Alright. Thanks very much, Jake. Good morning, everybody. So to begin, it will come as no surprise to any of you that the world runs on software. And software these days, as a result of that, has to be always available, reliable, has to be secure, has to deliver an exceptional user experience.
And observability is essentially the category of software that enables this to occur. Now observability has really gone through multiple levels, multiple phases. The first phase was really what we refer to as monitoring. Monitoring is really largely about dashboards. So dashboards, you might imagine, give you a status code as to whether your software is working or not.
So red, yellow, green. Is it working great? Is it not working at all? Etcetera. The challenge with dashboards is that they tell you that it’s not working, but they don’t tell you what’s wrong with it.
And by not knowing what’s wrong with it, you don’t know actually how to fix it. Now, it turns out that many of our competitors, and really the state of the art today in much of the observability community, is still dashboards. It still is red, yellow, green. But observability has really moved to the next phase. And that next phase is really oriented at a much more intelligent set of systems that are constantly evaluating your software environment and using, in the case of Dynatrace, AI, not just for the last year and a half, but for more than a decade, to analyze billions of interconnected data points, to tell you not just that something is wrong or where something might be wrong, but rather where is it wrong, precisely why is it wrong, and therefore, how do you fix it.
And maybe what’s so exciting to me about observability as we look ahead is this notion that the next stage is really moving us into a world of autonomous systems using agentic AI to not only give you very precise insights, very precise answers, but rather taking you to the next level of actually fixing those issues on its own and essentially going through a step of auto remediation, where agentic AI can evaluate what happened and then actually solve the problem for you. And so this evolution from monitoring to observability to agentic AI, that is the journey that we’re on in observability. Now, the result of this is a very large and rapidly growing market. We see the observability space itself at more than $50 billion and the application security component of that at around $14 billion, for a combined total of around $65 billion. So very large market, and you can imagine this because, as I said at the outset, the world runs on software.
And that software has to be operational. Now why is this getting harder? Why is it getting harder to manage software? I walked into a large oil and gas company a while ago, and the principal of a large oil and gas company down in Houston took me into their network operations center. Hundreds of people staring at hundreds of screens monitoring their software.
And his comment to me was, Rick, this is what you need to help me get rid of. Why? It’s because hundreds of people staring at hundreds of screens to run thousands of applications was unsustainable. You simply can’t take an organization that is, in this case, hundreds of billions of dollars in revenue and be staring at screens, seeing something go red, and then evaluating, okay, it’s red, who do I call, what do I do next? What you see is a very lengthy triage process, where you first have to figure out who to call, then they have to get on it, then they have to evaluate root cause, and then once they get to root cause, they have to figure out what to do about it next.
Very complicated, very lengthy, and in the meantime, you can have systems that are down. Well, if you look at all of our usage of software as individuals, as end users, we expect a user experience that works perfectly. We need to find the gate to our airplane. We need to be able to buy product online through e commerce. We need to be able to watch media streams.
Whatever it is, we need to be able to do financial banking, we expect it to work perfectly then and there every single time. And if that system is down, then we have a major issue, or I should say, the organization has a major issue, and that organization’s major issue is that their software is not working and they are having a challenge delivering the user experience their end users expect. This problem is made much worse by the cloud. Now clearly hyperscalers are exploding, about $250 billion of overall annualized revenue across AWS, Azure, and GCP these days, growing in the mid-20s. It is accelerating the delivery of software.
And it’s a good thing because for organizations, it’s making it easier to deliver software. But the problem is it is creating a massive explosion of data and an incredible increase in software’s complexity. And the result of that is more and more, you have these billions of data points that you need to be able to arbitrate and arbitrate rapidly through insights coming out of a sophisticated observability system. And so the cloud is creating fragmented data at enormous scale that needs to be processed, and that is what observability is all about. Now Dynatrace, we think of as the leading AI powered observability platform.
And this is important because of all of what I’ve said heretofore. You cannot process this data manually. You cannot get the number of people needed to address these kinds of issues. So what you need is really two things. You need, number one, a sophisticated system that is analyzing the data to create insights, and then secondly, to enable those insights to be actionable so that you can then immediately resolve these issues as you look at.
Moreover, it isn’t just about technical understanding of your business. It is also about what we refer to increasingly as business observability. And business observability is not just how your software is running, but how your business is running. I was in the Middle East meeting with the largest customer we have there, a very large bank in Saudi Arabia. And the CTO with whom I was meeting said, Rick, the CEO wants Dynatrace on his desktop.
And this was a major evolution because usually we sell to AI ops, we sell to IT, we sell to developers, we’ll sell to platform engineering. But what we’re seeing is whether it’s airlines, financial services, healthcare, travel, you name it, we are seeing a migration toward organizations really wanting to better understand their businesses themselves. And it is this business observability that I think is the next foray of observability as well. Now the other evolution of observability is toward completely integrated platforms and systems. And at Dynatrace, this is specifically what we’ve done.
So we look at not just application monitoring, but application infrastructure, real user monitoring, log management and monitoring. So there are multiple different segments which we put together, which then provide a completely integrated perspective of your overall monitoring environment. And in those early days of monitoring and dashboards that I talked about, what happened was that you might have multiple different vendors. You might have a vendor that would handle applications, a vendor that would handle your infrastructure, another one for end user monitoring, another one for logs, etcetera. That is disappearing.
One of the reasons for that is because it is a view across all of those components that gives you the most comprehensive picture of what’s happening in your environment. If you have to piece together all of those insights independently, then it makes it much, much more difficult to provide the insights needed. And so having all of those insights together and combined is a very sensible approach because then you have them all in one place. And that’s what we do with this platform. So as we look at the Dynatrace difference, we see it as these bottom three elements on this chart.
Number one, we have a completely integrated data store, which we call Grail. What Grail does is it stores all observability data types in our vernacular. It’s logs, traces, metrics, real user data, etcetera. All of these data types in a single data store. This is important because we are able to manage all of those data types and keep them together in context of one another, which provides the best insights associated with your business overall.
Secondly, we have a very sophisticated AI system that, as I say, is not something that we conjured up in the last year or eighteen months or even two years since generative AI became so prevalent in our society. It is something that we’ve been evolving for over a decade. And it consists of multiple different techniques of AI. One is causal AI. Causal AI is focused on root cause analysis, precisely what happened in your environment based upon that data, based on those insights. Secondly, predictive AI.
Predictive AI is oriented to applying anomaly detection machine learning on top of causal AI to anticipate where issues are going to occur so that you can then address them before issues begin. I talked about this notion of software working perfectly. Software can’t work perfectly if it breaks and then you have to fix it. Software can only work perfectly if you anticipate issues in advance and fix them before something happens. And then third and finally is generative AI.
And generative AI provides a natural language interface into the overall platform, to bring the Dynatrace platform and its insights to a much wider array of individuals. And by doing so, you can then accelerate the insights that come out of the Dynatrace observability platform. And then finally, it’s about automation. As I said at the outset, as we head into a world of agentic AI, what our customers want is a fully autonomous system that can auto remediate. Let me give you an example.
We have a large customer in British Telecom. They began with on the order of 16 observability tools, none of them particularly connected, and the result of it was a set of insights that were hard to piece together. They bring in Dynatrace, consolidate down these tools, integrate the data stores, accelerate the insights that they get out of those systems, and then can begin to automate the results based on those insights. The result of this was a 50% reduction, five-zero percent reduction, in incidents and a 90% reduction in mean time to respond or recover. Those stats are enormous.
Imagine huge companies, huge organizations that can reduce their number of incidents by 50% and then reduce the overall amount of time it takes to resolve incidents by 90%. This is huge. They estimated the cost savings associated with this at £28,000,000 over a three year span. And this is precisely what we see from our largest customers. We see them reducing incidents, reducing MTTR, as it’s called, and saving a substantial amount of money, not to mention the fact that they are able to deliver a much better user experience as a result of having software that works better, that is more available, more reliable, and that is more performant, generating, therefore, a better user experience.
If you look at the various analyst reports, Gartner, Forrester, GigaOm, ISG and others, we almost always, I’d say always, I think, are in the upper right quadrant or equivalent of leaders in the space. And the reason is because we deliver these kinds of answers, not just data, that enables software to work better. We target the global 15,000. We do this because we do have a sophisticated system and it enables the best analytics based on the broadest array of data. And you get the most data out of the largest companies, and so we tend to focus there.
But we do sell to a wide array of personas, not just AI ops, but we sell to executives, we sell to platform engineering, we’ll sell to SRE teams, we’ll sell to developers. A multitude of different personas are interested in observability data and its insights. And then finally, before we go into our Q and A with Jake, this is a bit of a look at Dynatrace financials at a glance. About $1.7 billion in overall ARR. We don’t lose customers very often, very rare, so we operate at gross retentions in the mid-90s.
Last quarter, we grew this business at 20% in subscription revenue with 29% operating margin, 32% in pretax free cash flow. It is a very, very healthy business, growing rapidly with customers who very much are on board with leveraging the value of Dynatrace and observability. So to sum up, very large and growing TAM and market, exceptional set of financials, a very healthy enterprise spending off about a half a billion dollars or more of free cash flow on an annualized basis. We have an incredible observability platform that delights and delivers extraordinary value to customers. And, I’m biased, but we have a great team, a great leadership team, and a great company of people delivering it.
We are motivated, passionate, and focused on delivering extraordinary customer value as the world moves ahead with its criticality of software as we look to the future. Jake, back to you.
Jake Roberge, Research Analyst, William Blair: Thanks, Rick. Really appreciate that. A really helpful overview to kind of set the stage here.
I guess just to kick things off, a common feedback point that I get is: Dynatrace, great company, operating in a large market, but there are other large players in there. So maybe, Jim, I’ll throw this over to you since Rick just did the presentation, let him take a breather.
Rick McConnell, Chief Executive Officer, Dynatrace: Yeah.
Jake Roberge, Research Analyst, William Blair: But I guess, first of all, how do you look to compete in that type of market where there’s other large players? And then second of all, there’s been a lot of acquisitions in the observability space with Splunk, New Relic, Sumo Logic. Has competition changed at all over the last few years as a result of those acquisitions?
Jim Benson, Chief Financial Officer, Dynatrace: Yes. So I’d start with the fact that you have a lot of players because, as Rick talked about, this is a very large spend area. And so it’s not surprising that you have multiple players. I think it actually is to our benefit. Some of the things that Rick talked about are really playing out as kind of a continuing trend.
We talked about it maybe eighteen months ago, this notion of companies dealing with tool sprawl for the very reasons that you outlined, that there are different divisions, different departments all using their own tools, very difficult to manage. Rick talked a little bit about that using the BT example. That’s becoming more of a theme where customers just can’t deal with that anymore. So we tend to focus on very large complex environments because that’s where we thrive. So we are in a great position, one, with the architecture of the platform. All the things that Rick talked about, the platform being unified, the platform being AI powered and enabled, these are all things that allow tools to be consolidated.
You consolidate tools, and you now have capabilities where you can save a customer money on software costs and you can save a customer money in the way they’re running their IT operations. And so I’d say the environment, yes, has a bunch of players. I’d say we are in a really good position because what’s happening in the broader market is a theme of consolidation, simplification and vendors that can integrate things and allow customers to have a better experience overall. And so I think what you’re seeing is we’re benefiting from that. And we’ll get into it a little bit.
We’ve done some things on the go to market side to better go on the offensive to capitalize on that. And we’ve done some things on the product and packaging side to allow customers to better leverage the platform than they have in the past.
Jake Roberge, Research Analyst, William Blair: Yes, that’s helpful. And then Rick, back over to you thinking about Gen AI, obviously a big topic in software land these days. But how do you see Gen AI impacting Dynatrace and maybe bifurcate it between both the workload perspective where obviously GenAI is just another large workload moving to the cloud as well as what you can do with GenAI in the platform from an agentic perspective and a product monetization perspective?
Rick McConnell, Chief Executive Officer, Dynatrace: Yeah. I think that’s, Jake, precisely how I would bifurcate it. On the one hand, you have AI observability workloads. AI is being used increasingly by organizations, and as they use AI, that’s generating actually more software.
Jim Benson, Chief Financial Officer, Dynatrace: Yeah.
Rick McConnell, Chief Executive Officer, Dynatrace: So I talked about explosion of data, increasing complexity, and more and more software being developed more rapidly in the cloud. Well, AI is further accelerating the rate of development of software, which is making the problem even worse and making the resource constraints even that much more difficult as well. So AI, from an AI observability workload perspective, is actually generating an increased need for observability and further heightening the evolution of the market. And our solution works to oversee and manage those AI workloads in the same way that we would oversee any other software workloads. Then secondly, it is about using AI in our platform and extending and evolving that platform to not just use causal, predictive, and generative AI, as I discussed earlier, but also to use agentic AI to take the insights and resolve those issues.
One thing that is critical here is really the differentiation of Dynatrace: in order to take action on insights, in order to take action on your observability data, you actually have to be sure you know what the problem is. So as I mentioned earlier, a lot of other organizations will provide correlations. They provide, I’d say, educated guesses as to where issues are. Because we have the Grail common data store fully fleshed out, it enables you to deliver insights using our AI engine that are deterministic.
You can count on them. And because they are trustworthy, you can then act on them through agentic AI. And that’s really the evolution of how we’re using AI in our platform.
Jake Roberge, Research Analyst, William Blair: Okay, that’s helpful. And then just shifting over to the macro environment, it’s obviously been a volatile macro over the past few months and even over the past year or so. So I’d be curious what you’re hearing from customers on the ground and then maybe how that potentially impacts some of those. You talked about a lot of deals are trending towards these platform consolidation deals where you might be consolidating 10 or 15 different point solutions on the one platform. So how does this more variable macro environment impact those larger transformative deals?
Jim Benson, Chief Financial Officer, Dynatrace: Well, I think I can take that. So certainly there is no denying that the environment is dynamic. It seems to be dynamic daily and weekly. Having said that, the observability market is pretty resilient. And it’s a resilient area for all the reasons that Rick outlined, because when you think about the underpinnings of pretty much any industry, even industries that you don’t think of as being technology industries, software is kind of the core of what operates a lot of these industries.
And so it’s critical to have observability tools in place that will allow you to manage your environments. Now, the benefit in dynamic environments goes to companies that can help you save money. And I think that’s why there’s a theme also of tool sprawl: if I can consolidate tools, I can likely save money. And I can also have my environment run more effectively. And so, even though we’re in a control-what-you-can-control world, we actually think the area that we’re in with observability is pretty resilient, number one.
And then number two, we actually offer something that’s differentiated that’s going to allow customers to save costs, which is important in the environment that we’re in. People are looking for areas that they can drive more cost out.
Jake Roberge, Research Analyst, William Blair: Okay. That’s helpful. And then sticking with you, Jim, can you talk about your guidance philosophy for this year? I mean, obviously, last year, if we flash back a year ago, you set kind of the expectation that because you were going through a go to market transition, you wouldn’t be raising the guidance until after the first half of the year when you got more visibility. So given the variable macro environment, but on the other side, maybe a more stable go to market this time around, how are you thinking about guidance and the pace of that throughout the year?
Jim Benson, Chief Financial Officer, Dynatrace: It’s a good question. I mean, as you know, I’ll start with: we always manage the business in a very measured way. And so that hasn’t changed. That’s kind of been a basic that we’ve done all along. So we want to ensure with guidance that we’re factoring in what we know and that we’re delivering a level of, the term I use is prudence, which is conviction that we can execute against kind of the parameters that we’ve set.
You’re absolutely right that in a dynamic environment, there’s tailwinds and there’s headwinds, right? The tailwinds are, one, we have a sales model that is now twelve months into its maturity around what we put in place for fiscal 2025. Tailwinds are new product areas where we’re getting accelerated traction, logs probably most notably. Tailwinds are traction in the partner community, where more deals are actually being influenced by partners than ever before. So, there’s a lot of tailwinds in the business.
And then you have the headwind side, which is customers in an environment that is a bit dynamic. They are cautious. They’re still spending money, but sometimes deals take longer. So relative to guidance, what we’ve done is we’ve built in a level of thought process that says deals will get done, but it might take a little bit longer. And so we factored into the guidance an expectation that deal cycles might be somewhat elongated.
I can tell you, to end with, the pipeline trends very, very strong. So you say, well, what kind of data points, what leading indicators do you see right now? Well, since Liberation Day, I’d say we’ve seen no change in our pipeline. Pipeline growth and health is unchanged. Close rates really are unchanged here in the near term.
Having said that, a lot can change and so we’ve tried to make sure that we can evaluate that and relative to increasing or changing guidance as always, I think I said last year, 20% of our year starts in the first quarter. So you get 80% thereafter. We’re not going to know a lot more after Q1. So it’s more likely an update. While we’ll evaluate after Q1, it’s more likely we’ll provide a more fulsome update after the first half.
Jake Roberge, Research Analyst, William Blair: Okay, that’s really helpful. And then shifting over to DPS, you’ve obviously seen really good adoption of DPS over the last year or two, now 60% of ARR on that new pricing model. Can you talk about the early benefits that you’re seeing with that transition and how you see it progressing over the next few years?
Jim Benson, Chief Financial Officer, Dynatrace: Well, from the get go, we talked about DPS when we first launched it, and we’ve basically had it for two years now. Q1 of fiscal 2024 is when we launched the GA. So you’re right, the stats are 40% of your customers, 60% of your ARR. The whole thesis was a SKU based model required a sale every time. So if you sold, say, application performance monitoring or full stack monitoring or some suite of offerings to a customer and they wanted to try something new, they wanted to try logs, they wanted to try application security, it was a sales cycle.
So it was a pain point for customers: they loved the products, they didn’t like the buying experience. And so the whole premise of DPS was give them full access to the platform with a rate card. So you commit to a term, most of them are three years, though it could be one year, and if you commit to more dollars, you get a better unit price. So the premise of that was always that, hey, if we do this and customers are getting value, they’ll consume more of the platform, they’ll add more workloads, they’ll trial new things. And we’ve seen that, and we provided some statistics on our Q4 call: customers that are on a DPS contract versus SKU based leverage on average 12 capabilities in the platform versus five for SKU based customers. They consume at 2x the rate of the SKU based customer.
They have much higher NRR. And so early in the journey, I was very honest about, hey, maybe there’s some sampling bias because you have large customers that maybe would have purchased anyways. When you have 60% of your ARR, there’s no longer sampling bias. There is just a behavior where you get them on the platform and now you can drive more adoption to accelerate your penetration within a customer, and we’ve seen great traction with that and we expect we’ll continue to see more of that.
Jake Roberge, Research Analyst, William Blair: Yes. So good stats there, 60% of ARR, 12 products versus five products, expanding at twice the rate, and probably why logs is benefiting so much as well. But we’re up on time, so I’ll stop it there. But thanks, Rick, thanks, Jim. For those that want to dig deeper into the story as well, we’re going to have a thirty minute breakout session up in Mayher and that starts in about ten minutes.
Rick McConnell, Chief Executive Officer, Dynatrace: Thanks all.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.