Elastic at Goldman Sachs Conference: AI and Growth Vision

Published 09/09/2025, 00:12

On Monday, 08 September 2025, Elastic NV (NYSE:ESTC) participated in the Goldman Sachs Communicopia + Technology Conference 2025. The company highlighted its strategic vision, focusing on artificial intelligence (AI) and data retrieval, while addressing recent financial performance and operational strategies. The discussion was led by CEO Ashutosh Kulkarni and CFO Navam Welihinda, who provided insights into Elastic’s growth trajectory and market positioning.

Key Takeaways

  • Elastic aims to become a leading platform for data retrieval and context engineering in AI applications.
  • Strong Q1 performance was driven by effective go-to-market (GTM) and R&D strategies.
  • Price increases reflect platform innovation, with a focus on delivering value to customers.
  • Internal AI deployments are enhancing customer success and marketing, despite some lag in finance.
  • Elastic’s platform approach is expected to drive long-term growth and profitability.

Financial Results

  • Q1 performance was strong, attributed to successful GTM and R&D initiatives.
  • Price increases were introduced to reflect the platform’s added value, seen as a "durable lift to the floor."
  • Strong consumption and commitment metrics in Q1 bolster confidence for the full year.
  • The company’s full-year guidance was raised, indicating confidence in business strength.

Operational Updates

  • Elastic’s GTM strategy focuses on enterprise and mid-market segments, with fewer accounts per representative.
  • A dedicated greenfield hunting motion targets organizations using the free version of Elasticsearch.
  • Internal AI deployments include the use of "Elastic GPT" for customer success and marketing.
  • A 40% case deflection rate in customer support is achieved through AI, with hopes for further improvement.

Future Outlook

  • Elastic sees massive opportunities in retrieval and context engineering as AI applications grow.
  • The company focuses on embedding its platform across various use cases, including APM, observability, and security.
  • Continued innovation and differentiation in AI capabilities are expected to drive top-line growth.

Q&A Highlights

  • CEO Ashutosh Kulkarni dismissed concerns that larger context windows in AI models would negate the need for real-time data retrieval.
  • The importance of accuracy over speed in data retrieval was emphasized.
  • CFO Navam Welihinda explained that price increases are justified by the platform’s value and customer commitments.

Readers interested in more detailed insights can refer to the full transcript below.

Full transcript - Goldman Sachs Communicopia + Technology Conference 2025:

Cash, Interviewer: How is everybody doing? It’s just day one of the conference, right? I mean, it’s a four-day conference, and you’re going to hear a lot about tech, software, AI. By the way, software is not dead. We’re going to talk about that. Ash has been at the helm as CEO of Elastic for a few years now. I had the pleasure of meeting him some three years ago. We have an addition to the executive team, Navam, who many of you will know from HashiCorp, very experienced executive. Ash, great to have you back. I think it’s the fourth Communicopia and Technology Conference we’re doing together. Really excited to have you here. I know you’ve been through a few conferences, and people ask me, for those that are not familiar with Elastic, can you please tell us your story? I’m not going to ask you that question.

What I’m going to ask you is the same question I’ve been asking you in 2022, 2023, 2024. What is your vision for the company? What does success look like in four to five years?

Ashutosh Kulkarni, CEO, Elastic: OK, the most important thing for me as I think about Elastic is what we are great at is search. That’s our core bread and butter. When it comes to unstructured information, unstructured data of any sort, the messier the data, the better we are at handling it, at helping you find just the right relevant information within that data. We have grown both as unstructured data itself has grown, but also as the use cases for unstructured data have grown. The most exciting one of all of them, obviously, is AI. The role that we play is search for AI, specifically as people do context engineering or providing relevant context, accurate context to a large language model so it can actually do its job, whether it’s doing it for some sort of agentic workflow or some conversational app that you might be building.

An LLM itself does not know anything about your private proprietary information. That’s where Elastic comes in. We are one of the most widely used vector databases, but we do so much more than that. It’s all about providing that right context. The vision for me is that Elastic becomes the platform, the data platform for data retrieval and context engineering as people build AI apps, and that we are baked in into this new AI stack that’s emerging across enterprises, mid-market, government agencies worldwide.

Cash, Interviewer: Got it. I was going to title the session "Cash Asks Ash." That should be at the front end, whoever’s going to—Cash Asks Ash, or Cash versus Ash, or whatever you want to call it.

Ashutosh Kulkarni, CEO, Elastic: We will find a way to really win.

Cash, Interviewer: It’s coming. Trust me. That’s great. As we dig into AI, there’s bits and pieces of the stack that you talked about: vector databases, vector search, vector embeddings, search, the concept of the core Elastic search. Can you, I mean, I know you’re an engineer, can you put it all together? What does that ultimate AI application stack look like? Why do we need these different elements that do these things, that embedding, search, database? Where do you play in each of these layers of the stack the way you are?

Ashutosh Kulkarni, CEO, Elastic: Sure. Take any large language model. First of all, we’re agnostic to what large language model you use. We integrate with just about every single one of them. If you are building any kind of agent, the two things that you need, the first thing that you need is an LLM, because that’s the one that knows how to do reasoning on information. It knows how to do inference. It’s able to predict the next token. It’s able to create an actual set of sentences that are reasoned and thought through and come up with a full detailed answer, or it can take actions, what have you. The other thing that it needs is some context of your information, because otherwise all large language models are only capable of answering based on what they have been trained on. What they are trained on is publicly available information.

When you’re dealing with an LLM in the context of your business, it has no understanding about your inventory, your parts, your products, your customer tickets, like what have you, your policies, your internal knowledge bases. It has no context of any of that. The only way to provide it with that information in real time, because by the way, all of your data is also constantly changing. You need to connect the data to the language model. That connection needs to be done in real time. It needs to be done with the minimum number of documents to pass to that language model, because the more information you pass it, the higher the chance that the language model is going to hallucinate. You want to keep that set down.
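The discipline described here, retrieving only a small, relevant slice of your data and passing just that to the model, can be sketched in a few lines. This is an illustrative toy, not Elastic's implementation: the scoring is naive term overlap standing in for a real search engine, and the document set and field names are invented.

```python
import re

def tokens(s: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"\w+", s.lower()))

def retrieve_top_k(query: str, docs: list[dict], k: int = 3) -> list[dict]:
    """Toy relevance ranking by term overlap with the query; a real system
    would use BM25 and/or vector similarity instead."""
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d["text"])), reverse=True)[:k]

def build_prompt(query: str, context_docs: list[dict]) -> str:
    """Pass only the top-k documents, keeping the context set small."""
    context = "\n".join(f"- {d['text']}" for d in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    {"id": 1, "text": "Return policy: refunds accepted within 30 days"},
    {"id": 2, "text": "Standard shipping takes 5 business days"},
    {"id": 3, "text": "Refunds go back to the original payment method"},
]
top = retrieve_top_k("refunds policy", docs, k=2)
prompt = build_prompt("What is the refunds policy?", top)
```

Keeping `k` small is the point the CEO makes above: the fewer, more relevant documents you send, the less room the model has to latch onto the wrong one.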

Cash, Interviewer: I would have thought that it should lead to less hallucination the more information it has, right?

Ashutosh Kulkarni, CEO, Elastic: Not really, because the more information you provide it in a narrow context, it can often go, OK, I have various choices, without recognizing that you know internally, because of various relationships that you might know about the customer, the segments that they’re in, etc., that although these five documents are all related to the general topic, only these three documents are related to that customer. If you don’t provide that bit of information, the language model might assume that all the documents you’re giving it are equally valuable. This is the point about RAG, or retrieval augmented generation. The RAG of three years ago is no longer the same RAG that we see today. Retrieval augmented generation has become very, very sophisticated now. People take into account things like known relationships. People will often model it in the form of a graph.

People take into account what you know about preferences and biases. People take into account what you might have to do in terms of filtering the set based on other parameters, like preferences, like geolocation, like other choices that customers might have opted into. There is this whole process of learning to rank, or re-ranking, that has become very, very powerful and well understood. This is what I mean by context engineering. Context engineering is more than just a vector database. It is about first organizing and chunking the data that you are working on correctly. Then it’s about using the right kind of embedding models to turn it into vectors. Then it’s the vector search process itself, with all the additional search facets that you can apply on top of it.

There is hybrid search, which might be, OK, I’ve got an answer using vector search, but I also want to look at what text search gives me, especially if the data is textual, and then re-ranking all of that based on this learn-to-rank technique that I talked about. Eventually, what comes out after all of that is ideally the most accurate bits of information that you pass to the language model. The language model is then much more likely to get the right answer, right? That process is involved. If you give the wrong answer really, really fast, it’s not very helpful. Accuracy of that context is what customers care about most. That is really the evolution that we have seen in the market, where customers are getting more and more sophisticated about this. All of this plays to our strengths. This evolving stack also has additional things to it.

For example, you are already seeing people talk about LLM observability. You are already seeing people talk about spend when it comes to understanding the number of tokens that are sent back and forth to these large language models, because that also runs up cost. You are seeing people talk about LLM security. The whole AI stack is going to continue to evolve. Our role is in the center of that retrieval and context engineering bucket. That is where we intend to have our greatest focus. Eventually, we will expand from there. If we capture that core ground, I think the opportunity for Elastic is massive.
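The hybrid-search-plus-re-ranking step described above maps onto a single search request. The following is a hedged sketch in the style of the Elasticsearch 8.x search API: a BM25 text query plus a kNN vector query, fused with reciprocal rank fusion (RRF). The index field names are hypothetical, and in practice the query vector would come from your embedding model.

```python
def build_hybrid_search(query_text: str, query_vector: list[float], k: int = 10) -> dict:
    """Request body combining a lexical (BM25) leg with a kNN vector leg,
    fused by reciprocal rank fusion. Field names are illustrative."""
    return {
        "query": {"match": {"body": query_text}},   # lexical leg: BM25 over text
        "knn": {
            "field": "body_embedding",              # dense-vector field
            "query_vector": query_vector,
            "k": k,
            "num_candidates": 5 * k,                # ANN candidate pool per shard
        },
        "rank": {"rrf": {}},                        # fuse the two ranked lists
        "size": k,
    }

body = build_hybrid_search("refund policy", [0.12, -0.03, 0.88], k=5)
```

A client would then pass `body` to the search endpoint; a learn-to-rank or re-ranking stage, as the CEO describes, can be layered on top of the fused results.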

Cash, Interviewer: On that note, I know your most recent quarter, and we’ll bring Navam into the discussion shortly. You talked about a few use cases. What is the best use case, the most impactful use case where this whole stack, the embeddings, coupled with the vector search, vector databases, and the context engineering, is having the maximum effect?

Ashutosh Kulkarni, CEO, Elastic: There are so many examples that I think are super fascinating. We’ve got a government agency that is using us for solving human trafficking use cases, where they marry phone call information, so audio information, with CCTV information that they look across using these kinds of vector search techniques to quickly figure out where a person of interest could have gone. Amazing kinds of use cases that are helping address real human problems, all the way to AI music companies that use us under the covers and as a vector database.

Cash, Interviewer: What are they doing? I mean, detecting illegal use of proprietary music?

Ashutosh Kulkarni, CEO, Elastic: I’m not going to go into the proprietary versus not. They use us as a vector database under the covers. As you are searching for music fragments and trying to create your own composition, Elastic is the technology under the covers. We are used by some DevSecOps platforms as the vector database under the covers for their code generation agents. All the way from these kinds of AI-native capabilities to more traditional use cases: we have banks that have put us into their core agent development framework and have built agents on top of us already for servicing their high-net-worth clients, as a conversational chat application used by their wealth management teams, all the way to automotive companies that have built agents for dealing with their partner networks. It’s very, very broad.

Customer support, code generation, like some of these AI-native use cases, like I talked about, ISVs that have embedded us under the covers as they are building AI applications that they are taking to market. You know, what’s exciting is all of this is still just scratching the surface, in my opinion, because most organizations today have a handful of these AI applications that they’ve rolled out. The aspirations that are very clear are to build hundreds of these for all kinds of automation across the industry. I think as each customer adds more and more of these AI applications, our consumption just naturally grows. Our focus today is to get embedded into as many of these use cases as possible, which is why we gave the count of 2,200 customers in the last quarter that are using us on Elastic Cloud. These are not trials.

This is not, you know, we have free trials and all of that, but these are actual customers and actual use cases.

Cash, Interviewer: Got it. I want to come back to you later on about how you participate in the economics of the AI stack. Navam, if I recall right, you’re an engineer, right? A former engineer.

Navam Welihinda, CFO, Elastic: Before. Recovering.

Cash, Interviewer: OK, yeah.

Navam Welihinda, CFO, Elastic: I’ve been a finance person so long that I’ve forgotten.

Cash, Interviewer: I understand. I’m an engineer. I used to be an engineer, but I’m a finance guy too. The reason I asked is, was that the reason you hired Navam?

Ashutosh Kulkarni, CEO, Elastic: Yes.

Cash, Interviewer: That’s the hyperscaler playbook, right? Got to be an engineer first and foremost, and then the finance stuff, the MBA, and all that.

Ashutosh Kulkarni, CEO, Elastic: I’ll tell you, like, one interesting.

Cash, Interviewer: No, seriously.

Ashutosh Kulkarni, CEO, Elastic: I know. I’m not going to answer it seriously as a question. One of the things that to me was really, really important is the ability to understand what it means to operate in an open-source model, right? Navam brought a ton of that expertise with his time at HashiCorp. The engineering piece might have been somewhere in the list, but it was definitely not one of the top reasons.

Cash, Interviewer: That’s great. Navam, welcome to our first.

Navam Welihinda, CFO, Elastic: Great to be here.

Cash, Interviewer: Should we call it a podcast? I mean, I think it’s better than a podcast. We’re asking some real questions of real people. How has it been so far for you at Elastic? How has your experience been?

Navam Welihinda, CFO, Elastic: It’s been great. I mean, as Ashutosh mentioned, you know, I’ve been in an open-source company before. The way I would describe Elastic is there’s a lot of things that rhyme with my previous experience at HashiCorp, and there are a lot of things that are way better than my previous experience at HashiCorp: the scale that we’ve achieved, the amount of GTM success we’ve seen over the past four quarters, and the testament we have with sales-led subscription revenue having such durable value over four quarters. More importantly, the ton of product innovation we’re delivering into our platform, and also the tailwinds of AI. There are a lot of things to get very excited about in this particular opportunity at Elastic. It’s been a really good time. I’ve lost count because I can’t play the new guy card anymore, but I think it’s been about two quarters.

It’s been a great time.

Cash, Interviewer: This quarter that you reported was your first full quarter?

Navam Welihinda, CFO, Elastic: This was my first full quarter of actual results.

Cash, Interviewer: Seeing the re-acceleration, margin expansion. The guy’s crushing it. How were you able to do this? What is your prognostication, if you uncovered so much in four months?

Navam Welihinda, CFO, Elastic: I take very little credit for this last quarter. I think there’s been a ton of work that the team’s been doing over the past more than a year on the GTM side, on the R&D side, that I get the benefit of just coming in in the last quarter and saying, hey, guys, you know, here’s how we did in this last quarter, and it was a great quarter.

Cash, Interviewer: He’s being very humble. I mean, yeah. Talk to us more about the price increase thing. People debate this: was all the growth just the price increase, or most of it, or not much of it? Personally, I like it when a software company increases prices.

Navam Welihinda, CFO, Elastic: Yeah.

Cash, Interviewer: Some look at it and say that means the units are down, blah, blah, blah. You want to invest in a company that has pricing power and is able to show that pricing power by raising prices, commensurate with the value that the company delivers. Where are people wrong, if you got the sense, I know you’ve been at a couple of other conferences before, about assessing the quality of your price increase and the durability of your growth?

Navam Welihinda, CFO, Elastic: I think there’s a few fundamental misunderstandings. The biggest is the misunderstanding of how consumption works and how price increases work in the context of consumption, right? First and foremost, I would argue that most of our peers who are here at your conference today, particularly the innovative ones, have most likely revisited prices from time to time. We’re no different, right? Some of them reflect their innovation through new SKUs that they introduce, and they charge for those SKUs. The way we do it is we drive a lot of innovation onto a single platform, and that platform increases in value over time. From time to time, we look at that, and we reflect some of that value through a price change. We’ve done that in the past. Last year, we did it on the self-managed side.

A couple of years before that, we did it on cloud and self-managed. This last May, we did it on cloud and self-managed. This is not likely the last time we’re going to revisit prices, given the amount of things we’re doing with the platform to actually give value to our customers. The underlying platform is getting more and more valuable. The thing we need to remember is how do we judge success in this context? We judge it in two ways. How are our customers committing to us on a quarter-by-quarter basis, and they have a choice? How are our customers increasing or decreasing consumption with us? That’s a day-by-day they have a choice on how they do that, right? When you think about the puts and takes of consumption, there are multiple things that are happening under the covers.

First is that data volumes increase, and that increases the customer’s consumption. Price is one factor that increases consumption. On the other side, there are things that decrease consumption. Our customers optimize quite frequently. As they optimize, consumption comes down; as they add data, it goes up. So it moves on an upward and downward trajectory. The second is we introduce things into our platform that are meant to make things more efficient for our customers. About a quarter before, if I’m not mistaken, or a quarter or two before the price increase, we introduced Elasticsearch LogsDB index mode, which was meant to increase the efficiency of how our customers store their data and therefore consume less. That’s by design.

Searchable Snapshots, which happened a couple of years ago, was meant to decrease the amount of consumption a customer has, to make things more efficient for our customers. In the consumption model, there are multiple puts and takes, of which pricing is just one. Pricing is elastic, so customers can optimize and decrease their consumption, or they can see the value in the platform and grow. What matters is, net of all these puts and takes, how is consumption going with our end customers? Q1 was a testament: consumption was strong. Commitments were strong, and more importantly, consumption was strong, which gives us confidence that what we are delivering to our end customers was ultimately absorbed, and they decided to increase their consumption with us.

Cash, Interviewer: Got it. In the context of being able to successfully put through a price increase and the quality of the Q1 beat, people see the rest of the fiscal 2026 guidance as being very conservative. How do you, as a CFO, balance the process of setting prudent expectations versus signaling confidence, which is always a tricky thing to do?

Navam Welihinda, CFO, Elastic: Yeah, I think, you know, on the one hand, we had an excellent quarter. It was a very strong quarter, both on a consumption and a commitment basis. It was balanced, without any outliers or one-time things that caused us to see this increase. The way you should think about the price increase, like I said, it was a durable lift to the floor, just like you adopt a new SKU, and then you grow from there. That’s how you should think about how the price increase changes over the year. To answer your question, we gave a prudent guide in Q1. We detailed the assumptions behind the prudent guide, sorry, in Q4. In Q1, we delivered against that guide. I think the meta point is that the results speak for themselves as to the confidence, right? The numbers will speak to the confidence.

We will continue to give prudent guides. One thing to remember is we also give a lot of narrative behind the guidance numbers, to talk about the underlying strength of the business. The Q1 we detailed was, in our opinion, a very strong quarter, which gives us confidence in the full year, which made us raise the full-year guide. We intend to execute every quarter and revisit the year as appropriate.

Cash, Interviewer: Yeah, I mean, just to be very blunt about it, the two things I look at in every quarter are how are commitments trending, because commitments are a predictor of future revenue, as you know. The second is what is the trend line on consumption. They were both very, very strong in Q1, and the underlying business is very strong. You know the prudence that Navam Welihinda bakes into the guide, I think that’s one thing. To me, as I think about the business, there’s a lot of excitement. You’re better positioned today relative to a quarter ago, two quarters ago, three quarters ago, after you went through your GTM changes.

Ashutosh Kulkarni, CEO, Elastic: Yeah, you think about, like, you know, when we had the issue over a year ago, five quarters ago, you know, we talked about where we stumbled, right? We’ve always been very transparent about stuff like that.

Cash, Interviewer: How are things looking up now?

Ashutosh Kulkarni, CEO, Elastic: Oh, it’s been great. I mean, the last four quarters have been very solid sales execution. You know, the changes that.

Cash, Interviewer: You’ve tweaked, and you’ve got the right model.

Ashutosh Kulkarni, CEO, Elastic: We’ve got the right model. The two things that Mark, our CRO, really wanted to get right: one was the focus on enterprise and mid-market, where each rep had fewer accounts, because the model that we had prior to him making that change had been there since pre-IPO. It was right for when we were a much smaller company. We had gotten to the point where, like, we needed to do something different to allow us to go deeper and broader in accounts. The second thing that we wanted to get right was greenfield territories where we could have a dedicated hunting motion, because that is also important. We have a massive open-source presence out there. Customers or organizations that are using the free version of Elasticsearch but have never paid us are great prospects.

How do we make sure that we have a dedicated greenfield hunting motion? Both of those, we established with that change that we made. As those settled, we are starting to see the benefits. We are seeing $1 million-plus customer accounts grow faster in the last year than in prior years. We’re seeing the right kind of outcomes. It just makes me feel very good about the future.

Cash, Interviewer: That’s great. Yeah, my go-to-market is actually the opposite. I started with 12 companies; now I’m covering 37 companies. If the IPO markets continue to be healthy, that number will likely grow. Everything has a limit. I want to talk to you, and I want to come back to you. How are you deploying AI internally within the company, as a CFO with an engineering background? Aren’t you, like, jumping all over this and seeing how you can, whether it’s operating efficiencies within finance or sales, how are you deploying this stuff and getting advantage out of it internally? Then we’ll come back to you externally.

Navam Welihinda, CFO, Elastic: I think in the broader business, there’s a lot of AI deployments, which include in customer success and in the marketing organizations. In those organizations, we even have an internal agent that we talk to and get intelligence from.

Cash, Interviewer: What do you call it?

Navam Welihinda, CFO, Elastic: It’s called Elastic GPT. I mean, we’re very, very creative.

Cash, Interviewer: For sales, it’s called Elastic GPT.

Navam Welihinda, CFO, Elastic: We’re very creative. I think finance is, I would say, behind the rest of the company in its adoption of AI. There are several reasons for it. It has to do with the maturity of the audit side and the acceptance of AI in audit. Naturally, I think the first places where we’d use AI and ML are on the FP&A side, and accounting will be a little bit behind the FP&A side.

Cash, Interviewer: How do you foresee using AI in something like accounting? How does that?

Navam Welihinda, CFO, Elastic: I think the first thing that needs to change is that the audit firms themselves will have to accept AI as a form of something auditable. Right now, the problem is that the audit firms are not quite there yet in accepting that. Once that acceptance comes, there’s a ton of things that we can do, both in preparation of memos or reconciliations that could be done with AI, that just make our accountants way more efficient than they are today.

Navam Welihinda, CFO, Elastic: On the support side, as an example, our support agent is very heavily used. The case deflection rate is somewhere in the 40% range, where that share of tickets never gets to any human being. They just get deflected. That’s a huge, huge advantage.

Cash, Interviewer: How long have you been doing it in customer support?

Ashutosh Kulkarni, CEO, Elastic: Over a year, well over a year, like a year and a half.

Cash, Interviewer: I thought it was going to go higher, I think.

Ashutosh Kulkarni, CEO, Elastic: Sorry?

Cash, Interviewer: It’s going to go higher. It has to go higher.

Ashutosh Kulkarni, CEO, Elastic: Yeah, I mean, we keep pushing the envelope on that, right? The way I think about it is, as we grow, if we can help make the broader teams more and more efficient, that just means more that we can drop to the bottom line.

Cash, Interviewer: What about development?

Ashutosh Kulkarni, CEO, Elastic: We use, I mean, we are a huge GitHub shop. As anybody who knows Elastic and Elasticsearch knows, our public repos are all in GitHub. We are a very large GitHub shop, and we use their Copilot, but we also use multiple different vibe coding tools.

Cash, Interviewer: What is your view on vibe coding, Cursor versus what you get with GitHub, the Copilot, Codex-based technology?

Ashutosh Kulkarni, CEO, Elastic: We use multiple of those because we found different advantages for different ones. We use two distinct vibe coding tools in addition to GitHub Copilot. What we have found is that the places where we are seeing the greatest advantages are in test development and in UI development. Anything beyond that, even though you would argue that our repository is all in the open, so arguably the language models have been trained on our entire source code, it’s still, like, we don’t.

Cash, Interviewer: You mean vibe coding or regular AI-produced code?

Ashutosh Kulkarni, CEO, Elastic: When you look at AI-produced code, it’s not where we would want to use it for, like, core modules. Where we’re seeing massive value is, in the past, we used to go into customer accounts where they would say, we love your platform, super scalable. It’s amazing. I need what’s called a searchandising UI. I don’t know if anybody is, you know, there’s merchandising, right? When you’re talking about merchandising, marketers like to change preferences. When you do a search, this thing will pop up before that thing. You want to pin certain results. That kind of a UI is called a searchandising UI. Historically, we’ve always preferred to have a platform.

Cash, Interviewer: Let’s see who’s cooler, Cash or Ash.

Ashutosh Kulkarni, CEO, Elastic: You’re better at naming. This idea of a searchandising UI has existed in the market for a long time. We didn’t build these out of the box. This used to be a reason why somebody would say, oh, I prefer, you know, if only your UI was nice, like, you know, that company there that only specializes in that. We would basically say, look, we can help you with that, but that’s not where we are putting our product energy. What we’ve discovered is, when it comes to building those kinds of UIs, these vibe coding tools are fantastic. We are able to very quickly churn out the appropriate searchandising UIs and make them bespoke, customer by customer. It’s like, oh, yeah, you want a searchandising UI? Let me create it for you. That issue has gone away.

Another area where we found a lot of value is conversion of scripts from competitive products that we might be displacing into Elastic scripts, right? They have a scripting language, and we have a scripting language.

You can just use these tools, and they’ll do the conversion in minutes. This used to be, like, months of work. Now it can get done within a week with testing and acceptance testing and everything. Huge savings. In migrations, in these kinds of areas, we’ve seen a lot of benefits.

Cash, Interviewer: Got it. Anybody want to jump in with a question? Just raise your hand. I know it is the afternoon effect. It’s, like, pushing 3:00 P.M., and we all need a cup of coffee before we can rejuvenate ourselves. I have another one for you, Ash. What have you learned from the early deployments? There are a couple of thousand customers using retrieval augmented generation in your install base. I keep hearing, maybe it’s biased talk in the media, that the context window is bigger for these latest models, whether it’s GPT-5, whatnot, and that people are decrying, oh, retrieval augmented generation is dead. I’m sure it’s not. How do you think about broader context windows versus the value added out of retrieval augmented generation, which is essential for you to add value to your customers?

Ashutosh Kulkarni, CEO, Elastic: I think the part that people miss on these context windows is that, fundamentally, making context windows massive does not really help, because your language model is running somewhere else. Say you allow it to receive a billion documents in a context window. Do you know how much time it takes to move a terabyte of data? Like, physically, to move a terabyte of data? You’d be waiting for minutes just to ship that data out. Context window increases are not really relevant for building business applications. Where context window improvements add value is if I’m having a long-running session with ChatGPT. Now I can maintain.

Cash, Interviewer: History with somebody’s data.

Ashutosh Kulkarni, CEO, Elastic: I have a conversation, and the model can remember the conversation you had with ChatGPT going back a year, because it just stored it in that context window and keeps building it up. There are use cases where context window increases are incredibly valuable. But they are almost entirely orthogonal and irrelevant to the conversation about RAG.
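Ash’s terabyte point is easy to sanity-check with back-of-the-envelope arithmetic. A rough sketch, assuming (my assumption, not a figure from the talk) a 10 Gbit/s network link:

```python
# Back-of-the-envelope: how long does it take to ship 1 TB to a
# remotely hosted model, regardless of how big its context window is?
# The 10 Gbit/s link speed is an illustrative assumption.

TERABYTE_BITS = 1e12 * 8            # 1 TB expressed in bits
LINK_BITS_PER_SEC = 10e9            # assumed 10 Gbit/s effective throughput

transfer_seconds = TERABYTE_BITS / LINK_BITS_PER_SEC
print(f"{transfer_seconds / 60:.1f} minutes")  # 13.3 minutes
```

Even on a fast link, that is minutes of pure data movement per request, which is why huge context windows don’t substitute for retrieval in an interactive application.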

Cash, Interviewer: Very clear, very clear. The way you explained that one terabyte of information reminds me of something. Anybody here remember in-memory databases and how they were a thing? There are some people here. You just won’t admit that you’re old, but I see some faces. In-memory databases were going to wipe everything out because they could run transactions entirely in memory, but then we had a reawakening.

Ashutosh Kulkarni, CEO, Elastic: We realized that data is actually way more than can fit into memory. I think this is the thing, right? The way we thought about RAG, or retrieval-augmented generation, two years ago was pretty naive. When you look at RAG now, there is way more to it than what we used to think. Having said that, RAG is never going away. Retrieval in real time is always going to be critical, because your data is constantly changing. If you want to ground the LLM in the right information, you have to do it in real time, and you can’t afford to ship terabytes of data to an LLM. You have to do it contextually; you retrieve just the right thing. Accuracy is going to matter more than speeds and feeds.
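The real-time retrieval step described above can be sketched in a few lines. This is a toy: keyword-overlap scoring stands in for what a production system such as Elasticsearch would do with BM25 or vector similarity, and the documents and function names are illustrative.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most terms with the query (toy scoring)."""
    q_terms = set(query.lower().split())
    score = lambda d: len(q_terms & set(d.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Only the retrieved snippets enter the model's context window."""
    return "Context:\n" + "\n".join(context) + "\n\nQuestion: " + query

# Illustrative corpus; in practice this is the live, constantly changing data.
docs = [
    "Flight EL123 departs Amsterdam daily at 09:00.",
    "Our refund policy allows cancellation within 24 hours.",
    "Flight EL456 to Lisbon was discontinued in 2024.",
]
context = retrieve("when does flight el123 depart", docs)
prompt = build_prompt("When does flight EL123 depart?", context)
print(context[0])  # prints "Flight EL123 departs Amsterdam daily at 09:00."
```

The point of the sketch: only a handful of relevant, current snippets travel to the model rather than the whole corpus, which is why retrieval stays necessary however large context windows get.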

When we talk to customers who are building these modern apps, the message is: look, if it’s a ChatGPT-style application and it gives you the wrong answer, you’re fine. If it’s an agent you’re depending on to book tickets for you, you want to make sure it’s booking a ticket on an airline and a flight that actually exist, right? If it hallucinates and gets that wrong, you’re hosed.

Cash, Interviewer: You’re transport airlines.

Ashutosh Kulkarni, CEO, Elastic: The problem of these language models is they are amazing. They’re magical. If they don’t have the right answer, they make it up.

Cash, Interviewer: Not me.

Ashutosh Kulkarni, CEO, Elastic: Engineers don’t do that, of course, but everybody else does.

Cash, Interviewer: Yeah, it’s getting fascinating. We have one minute and 19 seconds. We’ve talked a lot about search. Give us your assessment of the state of APM, observability, and security. What do you think?

Ashutosh Kulkarni, CEO, Elastic: Our focus is to keep playing in those areas where unstructured data is the most important part of the problem. In observability, we lead with log analytics and then expand from there into APM, infrastructure monitoring, et cetera. The same with security: we lead with SIEM, because it’s all unstructured log data, and then we expand from there. Our AI functionality, Cash, is helping us massively differentiate in these areas. Look at our Attack Discovery functionality, or the AI SOC engine we recently announced. These features are all about using the native AI stack to help you automate your SOC process and your SRE process. That’s how we intend to win in those spaces. It’s a very consistent approach, and that’s why I say we are a platform, not a portfolio, right?

That platform approach gives us massive leverage that we believe will allow us to keep growing both the top line and profitability for many years to come.

Cash, Interviewer: Got it. On that note, I wish you a successful journey in the years ahead. Thank you once again for your support. Navam, so good to see you part of the team here. Let’s give a round of applause for Ash and Navam.

Ashutosh Kulkarni, CEO, Elastic: Thank you. Thank you. Thank you.

Cash, Interviewer: Thank you.

This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.
