On Thursday, 11 September 2025, Elastic NV (NYSE:ESTC) presented at the Piper Sandler 4th Annual Growth Frontiers Conference, offering a strategic overview of its Q1 performance. The company highlighted robust growth driven by generative AI applications and serverless architecture, while also addressing challenges in predicting consumption trends.
Key Takeaways
- Elastic reported a 20% growth in Q1, with sales-led revenue increasing by 22%.
- The company is leveraging its search engine capabilities to power generative AI applications.
- Elastic’s serverless cloud offering aims to enhance efficiency and cost control.
- Price increases in Q1 have positively impacted consumption and revenue.
- Elastic is using AI internally to boost sales automation and support efficiency.
Financial Results
- Q1 total top-line growth reached 20%.
- Sales-led revenue, excluding monthly cloud, grew by 22%.
- Operating margin was slightly below 16%.
- Price increases were implemented in both self-managed and cloud businesses, leading to immediate positive impacts on consumption and revenue.
Operational Updates
- Generative AI: Elastic has been developing vector databases since 2017, positioning itself well as the generative AI market expands. The company supports the entire developer workflow for building AI applications.
- Security: Expansion beyond SIEM into EDR and cloud security is underway. Elastic’s EDR capabilities are noted for strong malware detection, with AI enhancing threat hunting automation.
- Observability: The observability business is expanding into metrics and APM, with AI improving experiences for SREs and DevOps practitioners.
- Serverless Offering: Elastic launched a fully managed serverless cloud offering, now available on Google, AWS, and Azure. This architecture is designed for efficiency and cost control, with pricing based on gigabytes ingested and stored.
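The gigabyte-based pricing model lends itself to a simple back-of-the-envelope estimate. Below is a minimal sketch with hypothetical per-GB rates; the presentation does not disclose actual prices, so the numbers are placeholders only.

```python
# Sketch of consumption pricing by volume, per the description above
# (priced on gigabytes ingested and gigabytes stored). The rates are
# hypothetical placeholders, not Elastic's actual price list.

def estimate_monthly_cost(gb_ingested, gb_stored,
                          ingest_rate_per_gb=0.10,
                          storage_rate_per_gb=0.02):
    """Return an estimated monthly bill for a volume-priced project."""
    return gb_ingested * ingest_rate_per_gb + gb_stored * storage_rate_per_gb

# A hypothetical security project: 500 GB ingested, 2 TB retained.
print(round(estimate_monthly_cost(500, 2000), 2))  # 90.0 with the placeholder rates
```

The appeal of this model, as described in the presentation, is that the customer reasons only about data volume, not memory, CPUs, or cluster sizing.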
Future Outlook
- Predictability: Elastic aims to improve predictability in its consumption-based business model by developing models to track and predict usage.
- Serverless Migration: Plans are in place to simplify serverless migration, with a goal of achieving push-button migration within the next year.
- Internal AI Usage: AI is employed internally for sales automation and support assistance, with expected efficiencies potentially reducing the need for additional staff.
Q&A Highlights
- Serverless Migration: The process involves creating a data snapshot and restoring it. Elastic is developing tools to simplify and clarify this migration.
- Internal AI Usage: Generative AI is being utilized to augment support personnel, enhancing automation in sales and support functions.
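The snapshot-and-restore flow can be sketched in terms of Elasticsearch's snapshot API. The repository name, bucket, and index pattern below are hypothetical, and a real migration would also handle repository credentials and verification; this just illustrates the shape of the three requests involved.

```python
import json

# Sketch of the snapshot-and-restore migration described above, expressed
# as Elasticsearch snapshot API requests. Names are hypothetical; a real
# migration would send these to the source and destination clusters with
# an HTTP client.

REPO = "migration-repo"        # hypothetical shared snapshot repository
SNAPSHOT = "pre-serverless-1"  # hypothetical snapshot name

def register_repo_request(bucket):
    """PUT _snapshot/<repo>: register a shared object-store repository."""
    return ("PUT", f"_snapshot/{REPO}",
            {"type": "s3", "settings": {"bucket": bucket}})

def create_snapshot_request(indices="logs-*"):
    """PUT _snapshot/<repo>/<snapshot>: snapshot the selected indices."""
    return ("PUT", f"_snapshot/{REPO}/{SNAPSHOT}?wait_for_completion=true",
            {"indices": indices})

def restore_request(indices="logs-*"):
    """POST _snapshot/<repo>/<snapshot>/_restore: restore on the new project."""
    return ("POST", f"_snapshot/{REPO}/{SNAPSHOT}/_restore",
            {"indices": indices})

for method, path, body in (register_repo_request("my-bucket"),
                           create_snapshot_request(),
                           restore_request()):
    print(method, path, json.dumps(body))
```

The "push-button" goal described in the Q&A would effectively wrap these steps behind a single action.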
For the complete details of Elastic’s presentation, refer to the full conference call transcript below.
Full transcript - Piper Sandler 4th Annual Growth Frontiers Conference:
Rob Owens, Co-Head of Tech Research: All right. Good morning, everyone. I’m Rob Owens, Co-Head of Tech Research, and I focus on our Infrastructure and Security Software practice here. Thank you for joining us this morning. Really happy to have the folks from Elastic with us, Eric Prengel and Ken Exner, to talk a little bit about the story, kind of where we’re at and where we’re going, I think. Gentlemen, thank you. Thanks for coming back to the astro. Ken and I had a near-death experience getting home last year, which we were just recounting, as our plane nearly struck another plane on the runway and we spent nine hours in an airport together. Hopefully this year will be uneventful.
Ken Exner, Elastic: We were taking off, and we’re starting to ascend. Suddenly, the plane hit the brakes. I thought they just ran out of runway. Apparently, we almost hit another plane. We were on the news and it was a lot of fun. Yeah, I came back.
Rob Owens, Co-Head of Tech Research: Walked a half mile on the tarmac to get back. Thanks for coming back, and what a flight you got out of here. Excellent. Let’s go back and chat a little bit about Q1. I think in the Mongo meeting just before you, Brent pointed out a lot of the inflection that’s happening within the space. I think it happened within your guys’ numbers if we look at revenue or subscription revenue. Parsing that, there was a price increase, which, by the way, does happen with software companies. We’d love to just kind of understand the different components of success that you guys saw, how you’re setting up for the rest of the year and maybe, Eric, a little discussion on pricing as well.
Eric Prengel, Elastic: Yeah. Do you want me to start with just the Q1?
Rob Owens, Co-Head of Tech Research: Sure.
Eric Prengel, Elastic: I’d say on Q1, we were very happy with our results. We saw both strong commitments and strong consumption. That’s really how we measure our business: both commitments and consumption. The total top line of the business grew 20%, which was great. We pointed to this metric, sales-led revenue, and that’s the subscription revenue excluding monthly cloud. That portion of the business grew 22%, which was a really nice growth rate for that. We also saw a really strong operating margin. We came in just below 16%. That was great, and that was really driven by a number of factors. The product has seen a tremendous amount of success. Generative AI is obviously a big driver of the product, and I think that having relevance is even more important. That’s what we provide.
As you think about Agentic AI, where it’s not just giving you an answer, but it’s actually potentially taking action, the capabilities that we bring with RAG and relevancy become even more important there. In security, we saw a lot of consolidation, and that was tremendous for us. As you see different components of the security stack, be it Elastic SIEM, XDR, cloud security starting to converge, that’s really been a benefit for us. In observability, we saw strong momentum as well, fueled by some of the AI stuff that we’re doing in security with the Elastic AI Assistant and other capabilities. Across the board, the product was strong. The go-to-market performed really well. We obviously had some changes a year ago, and we’ve seen that come back to a large degree where it’s executing nicely.
Across the board, we saw strength in Q1, and we’re really happy with how that performed.
Rob Owens, Co-Head of Tech Research: Now, it was a little bit of an easier comp given Q1 of last year and some of the different sales changes. I think that we’re seeing a lot of things accelerate here in the space from an infrastructure standpoint. Do you feel we’re nearing that tipping point? I’d love Ken to weigh in in terms of where we’re at and why.
Ken Exner, Elastic: With generative AI or more broadly?
Rob Owens, Co-Head of Tech Research: Yes.
Ken Exner, Elastic: The whole Gen AI thing happened a couple of years ago, and I think there’s been a huge amount of excitement about generative AI, and everyone’s waiting for those huge inflection points. It will happen. There will be significant growth. I think if you look at the last few big changes in our industry, like cloud computing, and if you look at the birth of the internet, even the most aggressive estimates tended to be conservative. I remember when the internet came out, people were saying, oh, this is going to have like $200 billion worth of impact on the industry. Everyone’s like, oh, no, that’s crazy. That’s crazy. I think the most aggressive one I saw was a trillion. Everyone said, that’s outlandish. I think, you know, 30 years on, we realized that it’s actually had a bigger impact.
It takes time to get there, but it eventually gets to step functions. I was at AWS at the beginning of cloud computing. I was actually at AWS when we launched EC2. The first three or four years, it was kind of like the best kept secret. We’re like, why has no one paid attention to this? This is like startups were using us. It was probably like around the fourth year that people took notice and enterprises said, we need to figure out our cloud computing strategy. It was still another three or four years before cloud computing actually had an impact on Amazon’s business overall. Today, we know that cloud computing has sort of changed the space in very fundamental ways, but it took time to get there. It grows in step functions.
I think with generative AI, people realize that it’s going to have a huge transformative impact on all industries. The early couple of years have been about experimentation, has been about trying to figure out what is this going to be used for? How are people going to take advantage of this? I think a lot of the early experimentation has been around assistants and chat-based experiences and copilots. It’s just interesting. It’s good. I think as people move towards automation, this is where Agentic AI is really going to play a huge impact on all industries because it’s no longer just about an assistant. It’s no longer about Clippy. It’s about how do you automate things and sort of fundamentally change how people are doing things and drive huge productivity.
I’m excited because I think Agentic AI is sort of sparking people’s imagination about how AI can be used to fundamentally change how we work. I think as we move into the next couple of years, you’re going to see sort of huge amounts of automation driven by Agentic AI, and that Agentic AI is going to be driven by context engineering, and context engineering is at the heart of what we do.
Rob Owens, Co-Head of Tech Research: What are the parallels, and what are the proof points we should be looking for? You lived this at ground level at AWS. From an investment perspective, we see cycle times compressing, right? We look at adoption of ChatGPT versus previous cycles, just people using the internet. It’s crazy the momentum these things have up front, but then there’s always that expectation that carries with it, and there’s kind of that period of disappointment. To that end, if you’re drawing parallels with what you went through that first time and what we’re seeing now, what do you think those proof points are going to be?
Ken Exner, Elastic: I think it’s going to be seeing people move from the first experimentation within a business to broader uses across it. One of the things we’ve seen is a lot of the businesses that we work with prototype and experiment in one area, and then as they develop success, it starts to move across the business. That’s one. The other is the use cases start to change beyond the first couple of experimental ones. I mentioned sort of these chat-based experiences have been sort of the canonical example of using AI. Moving beyond that to other types of use cases is what I think will be the signal to me. I think this year, coding and using AI to drive software development has sort of been the breakout use case. Over the last 12 to 18 months, it was chat.
Going forward, I’m going to be curious to see how people expand into other use cases of automation. Seeing the number of use cases expand, I think is going to be interesting.
Rob Owens, Co-Head of Tech Research: What is Elastic’s role in all this? Near term, long term?
Ken Exner, Elastic: At the heart of what we do, we’re a search engine. Why that matters in this is when you’re trying to build a generative AI application, you need to pass context to that generative AI. You need to pass context to an LLM. It’s essentially a search function. Everyone’s been hearing about vector databases over the last couple of years. We’ve been a vector database since 2017. We’ve been doing this for a long time. It’s not just being a vector database. It’s about supporting the entire workflow that a developer has in building these applications, from figuring out how they are going to ingest the data to how they are going to figure out how to chunk up that data, how to run inference on that data, or encode it to create vectors. Of course, it’s vector search or combining it with different techniques.
We have a range of different models to assist in this process, like query understanding models and re-ranking models to further get better relevance out of data. You can combine geospatial search together with vector search. Ultimately, you’re passing that context to an agent or an LLM. You can do that a couple of different ways. You can do that through prompt engineering into a context window. You can do that through MCP interfaces. You can expose things as MCP resources or tools. All of that, that entire tool chain is what we do. Unlike a lot of vector databases that, you know, just store and query vectors, we support the entire workflow.
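In miniature, the retrieval step described here (score stored chunks against a query vector, then pass the best matches along as context) looks something like the sketch below. The chunk text and three-dimensional embeddings are made up for illustration; a real pipeline would produce embeddings with an inference model and typically combine vector scores with other ranking signals.

```python
import math

# Toy sketch of the retrieval step: score stored chunks against a query
# vector and join the best ones into context for an LLM. The embeddings
# are hand-written stand-ins for model-produced vectors.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# (chunk text, toy embedding) pairs, as if produced at ingest time.
chunks = [
    ("Reset a password from the account page.", [0.9, 0.1, 0.0]),
    ("Invoices are emailed monthly.",           [0.1, 0.8, 0.1]),
    ("Contact support for locked accounts.",    [0.7, 0.0, 0.3]),
]

def retrieve_context(query_vec, k=2):
    """Return the top-k chunks by vector similarity, joined for a prompt."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return "\n".join(text for text, _ in ranked[:k])

# Query embedding for something like "how do I reset my password?"
print(retrieve_context([1.0, 0.0, 0.1]))
```

The point of the surrounding discussion is that this scoring step is only one stage; ingest, chunking, inference, re-ranking, and handing the result to the model all sit around it.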
Eric Prengel, Elastic: One thing I want to add to that that I think is important, which Ken kind of touched on a little bit: as you think about this Gen AI revolution, let’s call it a revolution for now. In 2023, a lot of companies started thinking about Gen AI with the advent of ChatGPT, and it was sort of this aha moment. A lot of companies said, oh, we need to do the things to get into Gen AI, and they started chasing Gen AI. Elastic was working on vector databases in 2017. Elastic had a lot of vector capabilities. When I first joined Elastic, it was before the ChatGPT thing. I remember hearing so much about vector databases, and I’d never heard of them before. There was a lot that Elastic was already doing.
What I would say is rather than Elastic chasing this market, call it luck or whatever you want to, or call it preparedness, which I hope it is, Elastic was working towards this market and building out capabilities where this market is really coming to us versus us chasing the market. I think that’s something that’s important to note about Elastic relative to a lot of other companies that are in and around this space.
Rob Owens, Co-Head of Tech Research: Excellent. This seems to be translating into other areas. Maybe you can touch on the security opportunity, which seemingly is improving around your Elastic SIEM offering, even observability as well, and why customers are looking to consolidate these areas. Typically it can be one buying center, but I think historically it’s been multiple buying centers in an organization, different users and just different use cases. Why do you think you’re seeing the consolidation, the success that you’re seeing there?
Ken Exner, Elastic: First, let me explain. In addition to our core search business, we are in the security business. We got into the security business because threat hunters were using us to search through all their logs and all their different unstructured data to look for that needle in the haystack. We started being the choice of all these threat hunters for searching for information and all this data that they have in their businesses. On the observability side, same thing. People were using us as a log analytics solution to search through the mass volumes of logs that they have. Over the last few years, that’s been the core of our observability and security business, being a SIEM or a security analytics platform and being a log analytics platform. We’ve been expanding that to a number of different areas.
On the security side, we’ve been quietly building up a bunch of EDR capabilities and cloud security capabilities so that we can not just land in SIEM, but expand into EDR and expand into cloud security. Our EDR capabilities are actually some of the highest rated, actually the highest rated. If you look at AV-Comparatives, they rank us as the highest-rated malware protection and detection solution on the market. Same thing on the observability side, where we’ve been expanding into metrics and APM. Our motion is we land with logs, we develop a strong relationship with the business there, and we expand into these adjacent areas. That’s going very well for us.
AI is playing a huge role in this as well because what we’re able to do is we’re able to use AI to improve the experiences for threat hunters and improve the experience for SREs and DevOps practitioners by automating away a lot of the things that they do. If you think about what a typical security analyst does, they’re doing lots of pattern matching. They’re doing lots of manual work. They’re sifting through hundreds of alerts a day trying to figure out what’s going on in their business. Is this an alert that is real? Is it a false positive? How is it related to some other alert? All of that work that they’re processing can be automated away through generative AI. We’ve been using AI, our own capabilities to transform these experiences to make it much faster for these practitioners to accomplish their tasks.
I’ve been surprised at how we were leaders in deploying AI in these two spaces. I’ve been surprised that no one’s been catching up. Like when we launched Elastic Attack Discovery in security, it won best in show at RSA, and it was a very popular set of capabilities. It was basically taking all the different alerts that a practitioner gets and automatically mapping an attack chain, sort of showing how all these alerts are related. No one has still figured out how to do that. I think a lot of the things that we’ve been able to do is because we are an AI platform. We understand how to use the context of all their data to build these types of experiences.
I think you’re seeing some momentum from people starting to pick us because we figured out how to use AI to really change these experiences.
Rob Owens, Co-Head of Tech Research: Ash loves to say security is a data problem. Can we expand on that?
Ken Exner, Elastic: If you look at what a security analyst is doing, they’re sifting through data. They have massive amounts of data that they have to pore through to figure out, you know, is there a security incident? They have to find the needle in the haystack. There’s a couple of problems. One is if you’re missing data or if you’re not collecting all the data, you’re going to be missing potential exploits. The first thing we always say is don’t throw data away. Make sure you’re keeping all the data that you have; if you have all this log data, use it. AI can help you process it faster. The second part is historically we’ve always used manual means to sift through that data. The process of threat hunting was getting better and better tools so that you could do the analytics.
AI is proving that you don’t have to do that manually. You shouldn’t have to do all that correlation analysis, all that threat hunting manually. Robots are better at this. Machines are better at this. I think that’s the thing for us is that if we can deploy AI to do that threat hunting, to find that needle in the haystack and tell you how that needle in the haystack is related to this other needle in the haystack, and then to help you remediate it. That’s, I think, where we can not only help people move faster, but remediate issues.
Rob Owens, Co-Head of Tech Research: Maybe shifting gears a little bit, touch on the serverless opportunity.
Ken Exner, Elastic: Sure.
Rob Owens, Co-Head of Tech Research: Maybe help explain it to the audience to start, and where you guys are at in that journey.
Ken Exner, Elastic: This is something we’ve been talking about recently because we recently launched a serverless cloud offering and went GA on all three major cloud providers. I guess Oracle is becoming a major cloud provider.
Rob Owens, Co-Head of Tech Research: Apparently, it is.
Ken Exner, Elastic: I’m going to have to adjust my talking point there. We went GA on Google, AWS, and Azure this summer. The serverless offering is essentially a fully managed version of Elastic on cloud. If you think about it, there are three different ways we support customers deploying Elastic. One is self-managed. You can run it yourself either on-prem or on cloud, wherever you want to. You manage it yourself. You’re just paying for a license. The second is a hosted offering. This is where we will host it for you on one of the cloud providers. We will provision instances, we will install the software, we’ll keep it patched. It’s a shared responsibility model. Customers are responsible for scaling, they’re responsible for cluster health, they’re responsible for sharding, all these other things. Our serverless offering is fully managed. You can think of it as SaaS.
It is a SaaS offering, so it is versionless. You don’t have to think about the underlying resources. It’s completely abstracted away from you. From a user experience point of view, it’s like comparing a PaaS to a SaaS, I guess you can say, which is it is a fully managed experience. The other aspect to it is it’s built on a cloud-native architecture, which allows us to take advantage of a lot of the efficiencies of cloud, including object storage. It’s a completely stateless architecture. We’ve built essentially a data lake style architecture that underpins this. This is important because it allows us to run more efficiently. It’s going to provide better margins to our business, and it allows us to control costs and lower costs for our customers as well, especially for smaller workloads where you’re paying for what you use. It’s going to be cheaper for customers.
Finally, I’ll mention we are also packaging it a little bit differently. We’re able to package a security offering for security professionals and package an observability offering for observability professionals and price it that way. If you are wanting a security solution, we have a serverless security project that is priced in a way that would make sense to you. It’s priced by gigabyte. You pay for gigabytes ingested and gigabytes stored, and that’s it. You don’t have to think about how much memory or how many CPUs you need to use for your workload. You don’t have to think about the hardware.
Rob Owens, Co-Head of Tech Research: Excellent. Eric, if I think about the last couple of years relative to Elastic, this is a story that has flirted with inflection many times. I know investors get excited. It’s been a pretty volatile business. If we look at your recent success, especially this first quarter, are you finding more predictability within the model at this point? Are you seeing more visibility relative to those opportunities, especially with the changes in go-to-market that happened a year ago?
Eric Prengel, Elastic: Yeah, it’s a great question. Just with scale inherently, you’re going to have a little bit more predictability, a little bit easier to see what’s coming. There’s also going to be things that in a consumption model can be just challenging. You talked about the price increase at the beginning, and we never got to that. At the start of Q1, we increased prices. We’ve done that in the prior year. Price increases are something that software companies all use as a lever to help drive value. This year, we increased prices on the self-managed business as well as the cloud business. We saw the cloud business price increase impact the business immediately. We saw a lot of positivity in that because as you think about a consumption business, it’s not so straightforward as just a P times Q analysis. There are different moving pieces there.
There are constant optimizations that take place, and an increase in prices will certainly have people rethinking optimizations. Also, a part of the reason we increased the price is because there’s a lot of value that we’ve added into the platform. As you think about Elasticsearch LogsDB index mode, that’s something that significantly reduces the amount of storage that our customers need and lowers their total cost of ownership. It’s similar to the Elastic Searchable Snapshots capabilities that we released a couple of years ago, which also reduced some of the costs. At the same time that we were increasing prices to customers, we were also lowering their costs with functionality that we’ve added. There were additional optimizations that happened.
All said and done, we saw an increase in our consumption from our customers, which means that their spend with us increased, which we saw as tremendously positive. To bring that back to predictability: with things like that, consumption is inherently, I think, a little harder to predict than some of these seat-based models that are more traditional SaaS. You pay for your seat. As we’ve gotten bigger, as we’ve had more experience with the consumption model, I think we have gotten better at predicting it. We’ve started to build different models in our ecosystem. We’re monitoring different things to track it. I think that it’ll always be a little bit more complicated to predict a consumption model than it is some of these other models.
Rob Owens, Co-Head of Tech Research: Great. We’ve got time for one or two questions if there are. Go ahead, Ethan.
Ethan: Yeah, just on serverless, can you talk kind of about the process a customer would go through to kind of migrate an existing cloud workload or use these?
Ken Exner, Elastic: Yeah, it would be similar to if you’re moving from self-managed to cloud. The typical process is you create a snapshot of your data and then you restore it from snapshot. That said, we want to make this easier. There’s no reason we can’t do this very transparently. One of the things that we’re working on over the next, say, six months or so is making that significantly easier for customers so that it’s push button. It’ll happen in phases. We’ll come out with some tools and we’ll continue to refine it. I want to be able to, let’s say over the next year, allow a customer to just push a button and it’s just seamlessly migrated for them.
Rob Owens, Co-Head of Tech Research: I’d love to ask about eating your own dog food.
Ken Exner, Elastic: Drinking our own champagne.
Rob Owens, Co-Head of Tech Research: Yeah, I like how we both just nodded. Where are you finding efficiencies leveraging AI and some of the different tools and some of the different capabilities Elastic has?
Ken Exner, Elastic: The first thing is that we started using it to change the game for observability and security. That was the first thing. I mentioned this before. I was kind of amazed at how long it’s taken others, and they still haven’t sort of caught up in some of these spaces to use AI. Internally, we use it too. Internally, we have a sales ops team that uses this to build sales automation. We have an IT team that uses this internally. Our support team is building all kinds of support assistance using AI. The support space is ripe for a lot of generative AI. If you want to really augment a support person, you can do a lot of what they do through AI.
What we’re realizing is that you can use AI to automate a lot of the work that they go through, like trying to figure out, you know, has this customer called before for this issue? What’s the sentiment of what they’re, are they pissed off? Knowing they’re pissed off even before they get on the phone based on the previous sets of conversations. There’s all this stuff that you can do to bring a ton of information to the support engineer. We’re using all that to really create a great experience for people when they call support because we can look at all the past information and understand a lot about the context for this person.
Rob Owens, Co-Head of Tech Research: Has it resulted in tangible cost savings or lower headcount in any of the functional areas where you’re starting to use it?
Eric Prengel, Elastic: Yeah, it’s a great question. I think that we’re still looking at that. There are ways where we can be more efficient. I don’t know if we necessarily have taken heads out of the business, but we might have added fewer heads to the business as we think about being able to be more efficient with some of the ways that we’re using Gen AI to really drive productivity and efficiency. Like Ken said, it’s really sales, support. Even on the engineering side, we’re starting to see some of these capabilities that we’re able to leverage benefit us a ton.
Rob Owens, Co-Head of Tech Research: Great. I think we’ll end it there. Thank you, guys.
Ken Exner, Elastic: Thank you.
Eric Prengel, Elastic: Thank you.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.