MongoDB at Goldman Sachs Conference: Strategic AI Focus

Published 09/09/2025, 17:10

On Tuesday, 09 September 2025, MongoDB (NASDAQ:MDB) presented at the Goldman Sachs Communicopia + Technology Conference 2025. The discussion highlighted the company’s strategic pivot towards enterprise-level deployments and AI applications, while also addressing the challenges and competitive dynamics in the database market.

Key Takeaways

  • MongoDB is shifting its focus to strategic, enterprise-level workloads.
  • The Atlas business is nearing $2 billion in revenue, offering room for efficiency.
  • AI and machine learning are expected to drive future growth.
  • MongoDB aims to expand its market share from 2% to 5%.
  • The company is investing in R&D and product development to support AI applications.

Financial Results

  • The Atlas business is approaching $2 billion in revenue.
  • Focus on both revenue growth and margin expansion.
  • Expectation of continued growth and profit to cash conversion.

Operational Updates

  • Sales strategy is shifting towards high-quality enterprise workloads.
  • Plans to increase productivity through offshoring and AI adoption.
  • Emphasis on product-led growth and developer awareness.

Future Outlook

  • Continued investment in R&D, product development, and marketing.
  • AI applications and relational migrations are seen as growth drivers.
  • An Investor Day in New York will provide more detailed guidance.
  • Enterprise workload growth is expected to continue throughout the year.

Competitive Landscape

  • MongoDB holds a 2% share in a $100 billion market, aiming to increase to 5%.
  • Differentiation from PostgreSQL through JSON support and scalability.
  • Competitors like Snowflake and Databricks are acquiring OLTP PostgreSQL companies.

Q&A Highlights

  • AI adoption is in early stages, with a focus on productivity and back-office applications.
  • Customers are cautious about AI in customer-facing applications due to risk of errors.
  • Embedding models are crucial for connecting private data with AI models.
  • Scalable platforms with governance controls are necessary for AI applications.

For further insights, readers are encouraged to refer to the full transcript.

Full transcript - Goldman Sachs Communicopia + Technology Conference 2025:

Kash, Host: It’s been an absolute delight to all the—I didn’t work with you every year during your journey, but I think I’ve overlapped with you during the most momentous aspects of your career. Congratulations. Here we are, and here’s to the road ahead with MongoDB. Mike Berry, the CFO of the company, who is, I think, new to the Goldman Sachs conference as CFO of MongoDB, right?

Mike Berry, CFO, MongoDB: Yes. Been here many times, first time with MongoDB.

Kash, Host: OK, excellent. Of course, the one and only Matt Martino, MongoDB. Matt’s telling me that this is one of my favorite coolest companies. I’m just so excited by being on stage. With that introduction out of the way, Dev, congrats. What a journey. I mean, that quarter was incredible. What’s ahead for MongoDB? I keep asking you the same question. I think this is the fourth year in a row we’re doing this together. What is ahead and what has changed in your assessment as to where MongoDB is going in the next four to five years? What’s the landing point?

Dev, MongoDB: Yeah, obviously, we can double-click on many facets of the business. The first point I’d like to make is we’re going after a very large market. This is not a winner-take-all market. If you think the market’s roughly about $100 billion, we only have 2% share. If we increase that share just to 5%, we’re a $5 billion revenue company. We have a massive TAM opportunity, and we don’t need to make any kind of weird pivots or anything to go after that TAM. I think our core business is growing well. Our quarter that we announced a couple of weeks ago was not driven by any AI cohort or some AI outliers. Customers are using MongoDB to run and transform their business day to day. The third thing I would say is that we do feel like we’re well positioned for the coming AI wave.

I’ve said on the call and at other conferences that we still see ourselves as being quite early in the enterprise segment in terms of adoption of AI. Most of the adoption is either through third-party ISVs or focused predominantly on end-user productivity, whether it’s developers with code gen tools or business users with office productivity tooling. That being said, and this may be where you want to take the conversation, we feel like we have the right architecture for what people need to do to build sophisticated, transformative AI applications, or agentic applications if you want to call them that, that will really transform the business. We can get into what role the database layer plays in those applications.

Kash, Host: Yeah, that was going to be my—you read my mind. That was going to be the next question. The past 10 years have been all about pivoting to the cloud, large-scale transaction systems that people thought NoSQL databases were not best optimized for. You got over that and were ACID compliant and transactionally every bit as fault tolerant and scalable as any other prehistoric relational database. The next chapter: what do you think about the opportunities and the risks presented by AI? Not every company—you and I have been through a couple of tech cycles and transitions. People from the old cycle, it’s unfair to say they don’t make it, but they kind of make it at different points in time. Oracle, SAP, Microsoft, they all got on the cloud bandwagon at different points in time. I said that too quickly.

How do you see MongoDB poised for the AI cycle, opportunities and risks-wise?

Dev, MongoDB: Yeah, you mentioned the term NoSQL. That’s a term I’ve actually come to abhor. I used to like it because it contrasted us against relational databases. The challenge is that everyone just buckets all NoSQL vendors into one bucket, which is a very superficial view of the market. We are the only modern database that provides full transaction support. We can support strongly consistent use cases, like transaction-intensive use cases for a financial application, trading application, or billing application. We also support eventually consistent use cases, like time series applications, where you don’t necessarily care about any individual data point. You’re more trying to understand the trends of data over time, and you want to be able to collect and process that information very, very quickly. Most people think of NoSQL as eventually consistent.

They don’t realize that we can really serve the needs of the most demanding transaction-intensive use cases. In terms of your question about AI, when you think about the role of the database in the AI era, I would argue that one of the key roles we will play is managing state and also managing memory. Think about it this way: if your LLM is your brain, your brain needs to get feedback on what’s happening to its body. If you touch a hot plate, you need to know, OK, I need to remove my hand because I’m going to burn my hand if I don’t remove it. You need some feedback mechanism. If you’re feeling tired, you need to sit down. If you’re getting hot, you need to sweat.

Similarly, modern applications need some mechanism to understand what is going on. The place where you have real-time information about what’s going on in your business is your operational data store, or your OLTP platform. That’s where you really understand what’s going on in the business. You can react and reason about what’s going on and maybe act on things or change things. Essentially, you need to know the state of things. Then you need to decide, OK, what do I need to do based on this state to drive certain outcomes? You obviously need to plan, and then you need to invoke actions. An LLM may tell you to do something, but it still has to invoke an action. Typically, that’s done through some software layer built on top of an OLTP platform.

We believe that we’re really going to play an important role in these new agentic use cases. If you ask yourself, what does a modern database for the AI world look like? One, I would argue, you need to support JSON. JSON is the best way to model the messiness, the complicated nature, the evolving nature, the hierarchical nature, the interdependent nature of data. You can’t superimpose that on a tabular structure. Two, you need to have sophisticated techniques on finding and retrieving information very, very quickly. We not only support traditional queries, we have a lexical search engine. We have a semantic search engine with a vector store. We also now have embedding models, which are ranked best in the industry. That’s a quality of signal because embedding models are the bridge between your private data and the LLM.
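Dev’s contrast between JSON documents and tabular structure can be made concrete with a short sketch. Plain Python dicts stand in for documents here; the data and field names are hypothetical illustrations, not MongoDB internals:

```python
# A hypothetical customer record modeled as a single JSON-style document,
# the way a document database stores it: hierarchy, arrays, and evolving
# fields live together in one object.
customer_doc = {
    "name": "Acme Corp",
    "addresses": [  # one-to-many data nests naturally
        {"type": "billing", "city": "New York"},
        {"type": "shipping", "city": "Chicago"},
    ],
    "orders": [
        {"id": 1, "items": [{"sku": "A-1", "qty": 2}], "status": "shipped"},
    ],
}

# The same data in a tabular model is spread across several tables that
# must be joined back together at query time.
customers_table = [{"customer_id": 10, "name": "Acme Corp"}]
addresses_table = [
    {"customer_id": 10, "type": "billing", "city": "New York"},
    {"customer_id": 10, "type": "shipping", "city": "Chicago"},
]

def shipping_city_document(doc):
    """One hop through the nested document."""
    return next(a["city"] for a in doc["addresses"] if a["type"] == "shipping")

def shipping_city_tabular(customer_id):
    """The equivalent lookup reassembles the relationship at query time."""
    return next(
        a["city"]
        for a in addresses_table
        if a["customer_id"] == customer_id and a["type"] == "shipping"
    )

print(shipping_city_document(customer_doc))  # Chicago
print(shipping_city_tabular(10))             # Chicago
```

The nested lookup reads in one hop; the tabular version has to stitch the relationship back together with join-style logic, which is the mismatch Dev is alluding to.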

What people start to realize is the way I reason about my private data is very important to get the quality of the outputs I want. I use a simple example. The word pitch can have so many different connotations. It could be a baseball pitch. It could be a soccer field, which is called a pitch. It could be an investor pitch. It could be the pitch of a plane. It could be the pitch of a roof. It could be the pitch of your voice. If you don’t understand.

Kash, Host: It could be the pitch on a cricket field.

Dev, MongoDB: I’m sorry?

Kash, Host: The pitch of a cricket field.

Dev, MongoDB: Yeah, exactly. If you don’t understand the context and meaning of your data, then how the LLMs reason about that data becomes much more challenging. The quality of the embedding models has a huge effect in terms of how LLMs can reason about your private data. When you marry that, on top of that, if you think about agents, agents don’t go home for dinner. They don’t take vacations. They don’t take lunch off. The intensity of usage with agents will require a massively scalable platform, and you need a distributed architecture to support that environment. When you start breaking down what the future database of the AI era looks like, it starts looking awfully like MongoDB.
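The “pitch” example can be sketched concretely. A real embedding model maps text to dense learned vectors; as a stand-in, this toy uses hand-made bag-of-words context vectors and cosine similarity to pick the sense closest to the surrounding words (all vectors and vocabulary are hypothetical):

```python
import math
from collections import Counter

# Hand-made "context vectors" standing in for learned embeddings: each
# sense of the word "pitch" is characterized by words it co-occurs with.
SENSES = {
    "baseball": Counter({"inning": 1, "strike": 1, "batter": 1, "fastball": 1}),
    "investor": Counter({"startup": 1, "funding": 1, "deck": 1, "valuation": 1}),
    "roof":     Counter({"slope": 1, "shingle": 1, "angle": 1, "rafter": 1}),
}

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def disambiguate(context_words):
    """Return the sense whose vector is closest to the query context."""
    query = Counter(context_words)
    return max(SENSES, key=lambda s: cosine(query, SENSES[s]))

print(disambiguate(["the", "batter", "swung", "at", "the", "fastball"]))  # baseball
print(disambiguate(["our", "startup", "needs", "a", "funding", "deck"]))  # investor
```

A domain-specific embedding model, like the healthcare example later in the conversation, is in effect a version of `SENSES` trained on that company’s own nomenclature.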

Kash, Host: Is it just luck or thoughtful planning that the database architecture had evolved in such a way that you could take on AI? Generationally speaking, databases that made it in the client server on-prem era were obscure. You had hyperscalers with their databases, and you had your database. Is it just good fortune or some thoughtfulness?

Dev, MongoDB: Yeah, I mean, I obviously give the credit to the founders. You have to remember, the founders had built one of the first web scale applications. They had started a company called DoubleClick, and that was one of the first web scale applications. Even today, Google uses the DoubleClick technology to drive billions of dollars of ad revenue. In the early 2000s, they saw the inherent challenges of trying to manage massive amounts of data, much of it unstructured, on a tabular architecture. Because of that, rather than constantly jerry-rigging around the constraints (and they’re very talented, they could work around them), they said, I’m tired of working around these constraints. I want to build something that I would want to use, something that is a much more natural and intuitive way to work with data.

JSON is another way of saying a document database. The document database, we believe, is the best way to work with and organize data. You could argue they were prescient or maybe a little lucky, but we’re happy at the outcome.

Kash, Host: These are things we don’t pick up on an earnings call. I mean, people don’t ask these kinds of questions. That’s why these firesides, I hope they happen more than once a year. We got once a year, and thank you for that. Matt wants to put Mike on the spot.

Matt Martino, MongoDB: Yeah, sure.

Kash, Host: Mike, welcome to Communicopia. You came from NetApp, where you had a lot of success driving leverage in the business. You have two quarters under your belt at MongoDB. When you look at the business today, where do you see the most opportunities to drive efficiency?

Mike Berry, CFO, MongoDB: Sure. Kash, Matt, thanks for having us. When I started a couple of months ago, I felt this way, and I would tell you I feel even more so now. When you look at MongoDB, it all starts with the business model. We generate such great revenue growth, and that cascades down to gross profit, so we have a ton of opportunity to invest. As we look going forward, it’s really three areas. One is we’ve largely built the infrastructure for the company. We’re everywhere we need to be from a sales and marketing perspective. We have direct sales. We have sales engineers. We have partners. We’re everywhere in the world. You don’t have this step function. Most of the new investment will be incremental to drive growth. The second piece is, Atlas is almost a $2 billion business now.

The scale affords us a lot of flexibility to drive efficiencies through all the rest of the groups. Number three, where all the fun starts, is productivity. We have, what, 5,500 employees. We already have the base that we need to drive the growth that we expect in the future. Now it’s all about driving productivity. We haven’t had any benefits from AI, for instance, in the company. We need to push a lot more offshoring. That productivity piece will be where we’ll focus across all the different teams. We feel very good about being able to drive growth not only in revenue, but then the flywheel cascades down to margin. You’ve seen that in our new guidance. We’ll talk about it a lot next week.

You’ll hear us say, come see us next week at Investor Day in terms of our confidence in being able to drive durable growth and margin expansion.

Kash, Host: Mike, a lot of companies at the conference have been talking about AI productivity gains in terms of driving efficiencies in their cost base. Where do you see the most opportunity from an AI perspective to drive efficiency?

Mike Berry, CFO, MongoDB: Yeah, it’s a great question. Dev talks about this a lot. I think what you’ve seen so far is companies focus largely on customer support and then on coding, where there have been real, I would say not yet material, but real advantages with AI. I talk to a lot of the CFOs, and you can ask them here, Matt. I don’t think, for instance, there’s any killer use case yet with AI in finance, but it’s coming. We focus a lot not only on productivity, but also machine learning and AI for our forecasting, especially in a consumption business. The ability to take all that external data and build it into your forecasting is super important. There’s been all the RPA and everything else that’s happened. I do think that there’ll be benefits from AI, but they’re coming over the next couple of years.

Kash, Host: That’s great. I want to touch on the Atlas acceleration you guys have seen over the last two quarters. What are the structural drivers behind the reacceleration, and how sustainable is that trajectory over the medium term?

Mike Berry, CFO, MongoDB: Yeah, great question. We do think it’s sustainable. You saw that in our new guidance. I would focus on three areas as it relates to Atlas. The first thing is our move up-market. In the past, we had asked the sales team to focus a lot more on the quantity of workloads versus the quality. What we did six, nine months ago is we asked them to focus more on enterprise. Our wonderful self-service business, which we’ll talk about again next week, can really fill in that lower mid-market. We’ve asked them to focus more on the quality and size of those workloads, and also tweaked the comp plans a little bit to say, hey, go get more ARR. That’s what we all want versus the quantity. That’s number one. The push up-market has helped as well.

We also saw strength in some of our larger, older customers, where we saw some of those workloads grow for longer than we had seen in the past. While there’s not a perfect correlation, we do think that that’s driven a lot of it. From a sustainability perspective, we do expect that to continue to grow for the rest of the year. You saw that in our new guidance when we upped the numbers.

Kash, Host: Dave, this higher for longer enterprise workload growth that you’re observing, what’s driving that? Is that the mission criticality of the workloads you’re now landing in really the last 12 months with respect to some of these older customer cohorts?

Dev, MongoDB: Yeah, to double-click on what Mike just said, our original thesis was: let’s encourage our reps to acquire as many workloads as possible, because it’s truly not easy to understand which workloads are going to grow, become the biggest, or grow the longest. We adopted a portfolio theory and said the more workloads you acquire, the better chance you have of finding those, let’s call it, mega workloads, or just a cohort of really high-growth workloads. With the benefit of hindsight, we realized that because we were indexing so much on volume, our reps were focusing on more tactical workloads that they could quickly close versus the more strategic workloads that required more selling, more engagement, more technical deliberation.

Consequently, we were kind of skimming off the top of the workloads versus going after some of the real crown jewel workloads, for lack of a better framing. When we made that comp plan change, combined with the move up-market, two things happened. One, we always saw the highest productivity of our reps at the high end of the market. Two, when we made those tweaks to the comp plan, we definitely saw a much better focus on closing more strategic workloads, which is, I think, driving the growth. Obviously, the workloads you close in Q2 have a de minimis impact in the current quarter. We hope that we’ll see similar kinds of behavior two to four to six quarters from now.

Kash, Host: Great. Dev, I want to move back to the AI piece just for one question here. You talked about the advantages of having vector search, traditional search, as well as the vector embedding models. I think when we look at the landscape, where you guys are a bit differentiated is the vector embedding models. Can you talk about how advantageous that can be in terms of field execution, going out there and landing new workloads? You now have 8,000 AI startups on the.

Dev, MongoDB: Yeah, I’ll give you a story. I met with the CIO of a very large health care company. As you can imagine, health care data is very proprietary, but it also has all these nuances in terms of nomenclature, acronyms, syntax, and so on. For an LLM to reason about all that data becomes very challenging. One of the things they started talking to us about is potentially building a custom embedding model just for their business, because by definition, that would give them a higher quality signal about their private data that the LLMs could reason about. No enterprise is ever going to give OpenAI or Anthropic all their private data. The embedding models are essentially the bridge between your private data and the LLM. The quality of the embedding model has a direct correlation to the quality of the output.

It’s that example I use again with the word pitch. There are so many nuances. Context is very important. The other advantage we have is that we can combine lexical search, or keyword search, along with semantic search. The sophistication of the queries you can do matters, because it’s all about finding the right information. I’ll give you a simple example. If you hired Albert Einstein as your intern and said, hey, I want you to do research on this hot company that I just found out about, Albert still needs to do some research. He’s not going to know anything about this company just through osmosis. He could go to the library and read every book on every company in the world, but that would not be very efficient.

What an embedding model says is go to this section of the library, go to this shelf, go to this row, go to this book, and in this chapter and this page is the exact information you need to reason about this company.

Kash, Host: That’s the best definition of an embedding model that I’ve heard so far. I’m going to use it.

Dev, MongoDB: The point is that you want to find an effective way for Einstein to basically find the right information to then reason and decide what to do with that information, make a recommendation, whether it’s to buy, sell, or do something else. The embedding model is just think of it as a way to have extremely high fidelity on your private data so that LLMs can quickly find and retrieve the right information to make the best decision possible. The ability for us to do that all in one platform, one unified developer interface, all the data stays in one place. All the data can be backed up in one place. You don’t have to stitch together multiple things.
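The Einstein-in-the-library analogy is, in effect, nearest-neighbor search over an embedded corpus. A toy sketch with hypothetical 2-D vectors (real embeddings have hundreds or thousands of dimensions, and a real vector store uses approximate search rather than a full scan):

```python
import math

# Hypothetical 2-D "embeddings" for a tiny library of documents.
library = {
    "annual_report_acme.txt": (0.9, 0.1),
    "baseball_rulebook.txt":  (0.1, 0.9),
    "acme_press_release.txt": (0.8, 0.2),
}

def nearest(query_vec, k=2):
    """Return the k documents closest to the query embedding: the index
    that sends the reader straight to the right shelf and page."""
    return sorted(library, key=lambda name: math.dist(query_vec, library[name]))[:k]

# A query embedding that lands near the "company research" region of the
# space pulls back the two Acme documents first.
print(nearest((0.85, 0.15)))
```

Embedding the query and the corpus with the same model is what makes the distance meaningful; that is the “high fidelity on your private data” Dev describes.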

A lot of people have compared us to PostgreSQL, but actually, that’s a false comparison, because really it’s PostgreSQL plus Pinecone (who, by the way, was first trying to sell themselves and just got a leadership change), plus something like Elastic, plus an embedding model from Cohere or someone else. Stitching all that together is very painful for people. The benefit we have is that out of the box, you get all that with MongoDB.

Kash, Host: You touched on Postgres a little bit. Competition with Postgres, the hyperscalers, it’s nothing new to MongoDB, but it comes up quite a bit. What are the most common misconceptions about MongoDB, and what do you believe are the platform’s enduring architectural advantages relative to some of these?

Dev, MongoDB: Yeah, I think there’s many in this room who, when our growth started slowing down, made the causal connection that, hey, PostgreSQL must be taking more share, because it’s ending up being somewhat of a two-horse race. There are niche databases, but it’s really us and PostgreSQL. I’d make a couple of points. One, while we obviously have been dealing with competition, as you outlined, since day one, we think a lot of our slowing growth was our own execution, and, hopefully we’ve not declared victory too early, but we feel like we’ve made a lot of progress again. Two, it’s interesting to note that PostgreSQL, just so everyone understands, is derived from the name Post-Ingres. It’s built on an old technology that people are obviously trying to continue to improve upon.

What’s interesting is that PostgreSQL now supports JSONB. A lot of the objections were: isn’t PostgreSQL good enough, so maybe you don’t need something like a, quote unquote, Ferrari like MongoDB? When you really dig under the covers, JSONB support in PostgreSQL is very rudimentary. Any document over 2 kilobytes in size starts creating performance overhead. What PostgreSQL has to do is called off-row storage. There’s a mechanism called TOAST, The Oversized-Attribute Storage Technique, which PostgreSQL has to go through to process these JSON blobs. The reason it supports JSON at all is a tacit admission that you cannot superimpose a very ordered tabular architecture on a messy, complicated world that has multiple modalities of data. It just doesn’t make sense. That’s why PostgreSQL supports JSON. The second problem PostgreSQL has is that the data model is very brittle. It’s very, very hard to make changes.

In a world that’s only escalating in terms of velocity, with people responding to new opportunities and new threats, building new capabilities, et cetera, you need a platform that enables lots of change very, very quickly. It’s very easy to make changes on MongoDB. The third thing is that PostgreSQL was designed to be a single-node system. You hear all these people saying they’re working on re-architecting PostgreSQL to make it more scalable. My engineers call it SOC-less when it comes to scaling. Essentially, we are built on a distributed data architecture from day one, so the most basic configuration of MongoDB is what’s called a three-node replica set. It means you have three copies of your data. Should there be any network or systems failure, your application is always up and running. Architecturally, we believe that we are well positioned. That being said, PostgreSQL does not need to die.
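The availability claim behind a three-node replica set can be illustrated with a toy simulation. This is not MongoDB’s actual replication protocol (no elections, no oplog); it only shows the majority principle the design relies on:

```python
class ReplicaSet:
    """Toy model of a replica set: every write goes to all live nodes,
    and data stays readable after any single-node failure because a
    majority of copies remains."""

    def __init__(self, nodes=3):
        self.copies = [dict() for _ in range(nodes)]  # one data copy per node
        self.up = [True] * nodes

    def write(self, key, value):
        for i, copy in enumerate(self.copies):
            if self.up[i]:
                copy[key] = value

    def fail_node(self, i):
        self.up[i] = False  # simulate a crash or network partition

    def has_majority(self):
        return sum(self.up) > len(self.up) // 2

    def read(self, key):
        if not self.has_majority():
            raise RuntimeError("majority lost: cluster unavailable")
        return next(c[key] for i, c in enumerate(self.copies) if self.up[i])

rs = ReplicaSet()
rs.write("balance", 100)
rs.fail_node(0)            # one node (or its network link) goes down
print(rs.read("balance"))  # 100: the application stays up
```

With three nodes, any one failure leaves two live copies, which is still a majority; lose two and the toy cluster correctly refuses to serve.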

If you have a traditional use case, the data model doesn’t change, and it’s very alphanumeric information, do you need to run MongoDB? No, you don’t. We have a lot of customers who do that. You have those kinds of use cases, but it’s not like the world’s going to end if you don’t use MongoDB. The market’s very big. PostgreSQL does not need to die for us to win. Obviously, we think that even just a couple of points of share could be very transformative for us.

Kash, Host: I love the way this conversation is going because we’re getting into the details of it. At a point in time when we’re between two cycles, the cloud cycle and the next AI cycle, these kinds of questions and the discussion, the depth of discussion we’re having, we barely got to three or four questions here. That’s super important if you are to paint the case for a durable growth company over the next seven to eight years. If you get certain things right at the front end of the cycle, the questions in the next few years will be consumption patterns, quarter to quarter, net expansion, lands. If you get this criticality at this point in time, I think it’s just a really good story. I want to be a little humorous. Maybe PostgreSQL should be called pre-JSON.

I want to come back to a point you made earlier about how this health care company, they have their own lexicon, lingo, which is a reinforcement that the value of enterprise data is very high. If I take that at face value, it would be super hard for the LLMs, foundation models, without naming anyone by name because we’re going to have a couple of the executives at the conference here, why would they be successful in SaaS?

Dev, MongoDB: I’m sorry. What are the?

Kash, Host: Why should investors believe that foundation models are going to be a slam dunk in SaaS? Because what you said, the value of the data, it’s very private. It should not be accessible to the public world outside.

Dev, MongoDB: Yeah, I try and take a first-principles approach. A common question I ask when I meet with customers is, what are you doing in AI? Invariably, it’s some end-user productivity initiative. Maybe they’re starting to play around with some agentic-based approaches, typically in the back office first. I ask, say, a financial services executive, are you implementing any AI use case that’s customer-facing or public-facing? They say, absolutely not. Why? Because of the risk of hallucinations. We’re still not comfortable that we can guarantee the quality of the outputs. God forbid some customer makes a buy or sell decision based on some recommendation from an AI-based system. That could be quite disastrous. Same with health care companies. People are still quite nervous that AI systems are probabilistic in nature, so you can’t guarantee the outputs. You see some data points.

GPT-5 was not this magical breakthrough where we’re getting closer to AGI. Dario, who I’ve spoken to a number of times, said six months ago that 90% of coding would be done by coding agents. I mean, Claude Code is great, but 90% of the code is not being done by AI. I think we are, again, in the very, very early innings of AI adoption. What they’re doing in terms of the research breakthroughs is really impressive, but I think we still need algorithmic breakthroughs to get the next layer of intelligence in place. When I listen to what Alex Karp says, I kind of align with it: think of AI as this raw material.

You need some sort of ontology architecture around it, where you need to understand entities and relationships and concepts and rules to put the scaffolding around this raw material to provide guardrails to produce the output that you can generate. I think that’s what you’re going to start seeing as people start deploying agents, is there’ll be lots of guardrails around these agentic platforms. Think about agents. You have to control what permissions do they have. I don’t want an agent to see something that an agent should not be seeing. You also have to understand governance. I don’t want one agent contradicting what another agent’s doing. I want to understand what my agent’s doing in general. Like, are they generating the outputs that I really want? That whole governance scaffolding infrastructure, we’re still in the very, very early innings.

I think that all has to come to place before you really see people really transforming the business with AI.
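The permissions and governance scaffolding Dev describes can be sketched as a simple policy check in front of every agent action. Agent names, tools, and the policy table here are all hypothetical:

```python
# A minimal sketch of agent guardrails: every tool invocation passes
# through a permission check and is recorded in an audit log.
POLICY = {
    "support_agent": {"read_ticket", "draft_reply"},
    "billing_agent": {"read_invoice", "issue_refund"},
}

def invoke(agent, tool, audit_log):
    """Allow the call only if the agent's policy grants the tool."""
    if tool not in POLICY.get(agent, set()):
        audit_log.append((agent, tool, "DENIED"))
        raise PermissionError(f"{agent} may not call {tool}")
    audit_log.append((agent, tool, "ALLOWED"))
    return f"{tool} executed"

log = []
print(invoke("support_agent", "draft_reply", log))  # draft_reply executed
try:
    invoke("support_agent", "issue_refund", log)    # governance blocks this
except PermissionError as err:
    print(err)
```

The audit log is the other half of the point: governance means being able to see, after the fact, what every agent did and what it was refused.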

Kash, Host: Got it. Matt?

Matt Martino, MongoDB: Yeah, Dev. Two large analytics platforms recently acquired PostgreSQL companies. On the last call, you noted that this reinforces OLTP as the strategic high ground for AI. I thought that was an interesting comment.

Do you see AI shifting more of the value to database platforms like MongoDB in the future?

Dev, MongoDB: Yeah, so I want to be clear. With AI, there’s two things. There’s training, and there’s inference. OLAP technologies are great for training and data prep, and you have already the built-in permission structure of the data, so the LLMs know what data they have access to and who should see what, et cetera. That’s all great. Obviously, Snowflake and Databricks are great companies, but the fact that they had to make acquisitions in the OLTP space is acknowledgment, again, acknowledgment on their part that OLAP is not the strategic high ground for inference. To do inference, all the points I made earlier, you need to have access to real-time information. What product shipped? What is my supply chain looking like? What are the prices of X, Y, and Z goods that I may want to buy or sell? You can’t get that from an OLAP system.

You need real-time access to that system to be able to make, essentially, some decision about that. The fact that they made these acquisitions, I think, basically indicated a couple of things. I remember when Frank Slootman was running Snowflake, who I respect a lot, but he said, we have this Unistore architecture, and we’re going to come out with our next-generation OLTP platform. Obviously, the fact that they bought Crunchy Data was admission that that didn’t go anywhere. Then you had Ali also saying that he has the best data engineers in the world, and he’s going to come out with his next-generation OLTP platform.

The fact that he bought Neon, basically a vibe coding platform for hobbyists, and by the way, they had a big outage, so it’s not enterprise grade, speaks volumes about the fact that building an OLTP engine that’s battle-tested, enterprise grade, that addresses the security, the durability, the availability, and the performance requirements of a customer like Goldman Sachs or a big telco or a big industrial manufacturing company is not easy. I mean, we still consider ourselves kind of teenagers in this database market, where we’re 18 years old. We’ve gone through the knocks with nearly 60,000 customers. We’ve seen almost every use case across almost every geography, across almost every customer segment. There’s no compression algorithm for experience.

I think that speaks to the fact that we believe that we’re well positioned just from both experience and the enterprise-grade infrastructure, as well as architecturally, from the fact that we’re a native JSON database that naturally embeds lexical and vector search, as well as the embedding model.

Matt Martino, MongoDB: Dev, I want to switch to the relational opportunity. Displacing legacy relational systems has always been an attractive opportunity for MongoDB. I once heard that when the world ends, the only two things left standing will be relational databases and plastic. Can you talk to us about the MongoDB Relational Migrator tool? It's intended to make that lift and shift a little bit easier. What are some of the advancements you're driving through AI?

Dev, MongoDB: Yeah, we're going to have a discussion on this next week for those of you who plan to attend our Investor Day in New York. Essentially, when I took the company public in 2017, we had called out that 30% of our new business was migrations from relational databases to MongoDB, which I thought was an important data point because most people thought we were just going after newfangled use cases. Obviously, our cloud business soared, and that was predominantly new, but we still saw a lot of relational migrations. I constantly went to my engineers and said, why can't we do more to win more of this? The constant refrain I got from my team was that, hey, remapping the schema is not that hard. Moving the data is not that hard. Rewriting the app code is hard, painful, long, and costly.

Unless a customer is under a lot of pain, no one wants to start rewriting their app. Fast forward eight years, to when OpenAI announced ChatGPT, and all of a sudden I said, wait a minute. We could potentially use AI to refactor this code. That's essentially what we are doing: building a tooling platform to automate the migration process from relational to MongoDB. We'll get into the details next week. Just to explain why customers care: one, there's a ton of technical debt on these platforms. For example, if you want to AI-enable these legacy applications, I want to marry metadata (metadata is basically data about data) to this data so I can reason about what data I have and make good decisions.

You can't do that on a legacy platform. The data model is incredibly brittle. You have end-of-life issues; Sybase is going end of life. You have regulatory risk if you're a financial services or health care company, with regulators saying you're running your crown jewels on a platform that's very old. You've got to get off these platforms. By the way, the tax of running on these platforms is very, very high. For a confluence of reasons, people are saying, I've got to do something. We have a lot of demand. The obvious question we're trying to figure out is the best way for us to build this. We want to take a product approach, not a services or systems integration approach. There'll be some combination of product and services because there's lots of variability. We'll get into a lot of this next week.
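The schema remapping Dev describes can be pictured with a small, self-contained sketch. This is an editor's illustration, not MongoDB's Relational Migrator: it collapses a hypothetical normalized two-table layout into nested JSON-style documents, which is the core transformation a relational-to-document migration performs (table and field names are made up for the example).

```python
# Hypothetical rows as they might come out of two relational tables.
customers = [
    {"id": 1, "name": "Acme Corp"},
    {"id": 2, "name": "Globex"},
]
orders = [
    {"customer_id": 1, "sku": "A-100", "qty": 3},
    {"customer_id": 1, "sku": "B-200", "qty": 1},
    {"customer_id": 2, "sku": "A-100", "qty": 5},
]

def to_documents(customers, orders):
    """Embed each customer's orders inside one document, replacing the
    foreign-key join with nesting."""
    by_customer = {}
    for o in orders:
        by_customer.setdefault(o["customer_id"], []).append(
            {"sku": o["sku"], "qty": o["qty"]}
        )
    return [
        {"_id": c["id"], "name": c["name"],
         "orders": by_customer.get(c["id"], [])}
        for c in customers
    ]

docs = to_documents(customers, orders)
```

As Dev notes, this remapping step is the easy part; the hard, costly part that AI is now being applied to is rewriting the application code that queried the old tables.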

Kash, Host: Any questions? Yes, Bijan. All right. Before the mic gets over to you, just speak loud.

Bijan: Thank you. As we move into thinking about agentic apps, one of the things they tend to do is take your physical footprint and make it more digitized. That's how they eat into labor budgets. Naturally, that's multimodal, like the way that we interact. As such, the complexity of these apps just starts to rip, for lack of a better word. Curious on your thought process around when and in what use cases you see a more SQL approach breaking versus a multimodal approach having to interact with all sorts of different parts of the world. That, to me, was one of the best validations that, as we look forward, you really can't think about the PostgreSQL and MongoDB debate the way you did just 90 days ago.

Dev, MongoDB: Yeah, so I would answer your question a few ways. Why do customers choose MongoDB over PostgreSQL? One is data model flexibility. To your point, being able to handle multimodal data is so much easier in MongoDB than on a tabular architecture. Two, data model agility. I need to be able to change the data. The interdependencies and relationships of this data may constantly evolve. That's going to happen a lot in AI. I need to be able to constantly adjust my schema. A lot of people think we're schemaless. That's not true. We have a flexible schema. We can have governance around that schema, but we can also change the schema quickly when you need to.
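The "flexible schema with governance" idea can be sketched in a few lines. This is an editor's illustration of the concept, not MongoDB's actual validation API: documents in one collection may carry different fields, while a lightweight validator enforces the invariants that must always hold (the required fields and sample documents here are hypothetical).

```python
# Fields every document must carry; everything else is free to vary.
REQUIRED = {"sku", "price"}

def validate(doc):
    """Governance check: reject documents missing required fields,
    while leaving the rest of the shape open-ended."""
    missing = REQUIRED - doc.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return doc

products = [
    validate({"sku": "A-100", "price": 9.99}),
    # A later document adds AI-related fields with no migration step.
    validate({"sku": "B-200", "price": 19.99,
              "embedding": [0.12, -0.33, 0.08],
              "tags": ["seasonal"]}),
]
```

The point of the sketch: the second document gains an embedding field without any schema change being applied to the first, which is the "agility" Dev contrasts with a rigid tabular architecture.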

The third reason is being able to support these very sophisticated, what I call hybrid search techniques, where you need to be able to do both lexical and semantic search to find information very, very fast. The fourth reason is platform scalability, being able to scale out massively, as I said. Agents don't go home to sleep. Agents constantly chug away. They don't take coffee breaks. They don't stop for lunch. You need a platform that can scale because the intensity of...

Kash, Host: They need GPUs, more expensive.

Dev, MongoDB: They need compute. That is definitely true. The intensity of usage will be much higher when you’re replacing potentially humans with agents because, by definition, they can work harder and longer.
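The hybrid search Dev described a moment ago, blending lexical and semantic relevance, can be illustrated with a minimal sketch. This is an editor's illustration, not MongoDB's implementation: it combines a keyword-overlap score with a cosine-similarity score and ranks by the weighted sum. Real systems use inverted indexes and approximate-nearest-neighbor vector indexes; the two-dimensional vectors here are invented for the example.

```python
import math

# Tiny hypothetical corpus: each document has text and an embedding.
docs = [
    {"id": "a", "text": "refund policy for damaged goods", "vec": [0.9, 0.1]},
    {"id": "b", "text": "shipping times and carriers", "vec": [0.2, 0.8]},
]

def lexical(query, text):
    """Fraction of query tokens that appear in the document text."""
    q, t = set(query.split()), set(text.split())
    return len(q & t) / max(len(q), 1)

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def hybrid_rank(query, qvec, docs, alpha=0.5):
    """Rank documents by a weighted blend of lexical and semantic scores."""
    scored = [
        (alpha * lexical(query, d["text"])
         + (1 - alpha) * cosine(qvec, d["vec"]), d["id"])
        for d in docs
    ]
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

ranking = hybrid_rank("refund policy", [0.85, 0.2], docs)
```

The blend matters because the two signals fail differently: lexical search misses paraphrases, while pure vector search can miss exact identifiers like SKUs, so production systems typically fuse both.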

Kash, Host: Time for maybe one more question, and a really good one.

Matt Martino, MongoDB: Yeah, that’s a good one. Here you go.

Bijan: Oh, sorry, we already had Mike Berry.

Dev, MongoDB: Hi, guys.

Bijan: Good to see you, Dev. Maybe for Mike, just in terms of the CFO philosophy, I would have described the first chapter of MongoDB's public history from a margin perspective as very incremental: growth first, explosive growth, and incremental margin expansion. When the company went public, non-GAAP operating margins were deeply negative. What's your philosophy at this point? Is that kind of incremental step-up in margins how we should think about things going forward? Or is there an opportunity for a larger step up, with GAAP operating margins getting to a more normalized level?

Mike Berry, CFO, MongoDB: Great question. I’ll answer the question, but come next week, we’ll give you a little bit more. My view of this is at the time when MongoDB was growing, and the company did a great job, it was all about growth. When you’re growing 30%, 40%, 50%, you should invest, and you should drive that growth. Now where we are with the scaled business, with a business that generates a bunch of gross profit, we can do both. The expectation is we can grow and have durable revenue at the top line, but there’s no reason why we also can’t drive margins. As I talked about a little bit before, we’re still going to invest in growth.

The things we’ve talked about, R&D, products, marketing, developer awareness, all of those things, the product-led growth, we will continue to invest, but we don’t need to invest like we’ve done in the past. I’ve been pretty clear, which is we can do both, drive sustainable, durable growth, especially in Atlas, but also be able to drive margin growth. The third piece, it wasn’t in your question, is, hey, folks, we’re a business. We need to generate cash as well. Also the conversion of profits to cash. You should expect to see continued growth. There’s no reason for us to pull back on the lever and say, hey, we don’t need to spend here. We’re just going to spend a little bit smarter, reallocate dollars, and be able to drive growth. Hopefully, you’ve seen it in what I’ve done in the past. Hey, folks, we’re going to be pretty transparent.

Here’s the goals. Here’s what we’re going to do. Here’s the drivers of the business. We’ll walk you through that in more detail next week.

Kash, Host: On that note, Dev, congrats on the great milestones at MongoDB. Databases used to be very boring when I started on the sell side, and you made it exciting. You’re the steward of the transaction database ready for the AI world. Thank you for all the great work you’re doing for the industry, for our investors. Mike, pleasure to meet you. Let me be the first to welcome you back to the 2026 Communicopia.

Dev, MongoDB: Thank you, Kash.

Congratulations on your retirement, and thank you for having us.

Kash, Host: Thank you so much.

This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.

