NetApp INSIGHT 2025 Investor Tech Session: AI-driven data management takes center stage
On Tuesday, 14 October 2025, NetApp (NASDAQ:NTAP) hosted its INSIGHT 2025 Investor Tech Session, unveiling its strategic focus on AI-driven data management. The conference highlighted NetApp's advancements in handling unstructured data and simplifying AI pipelines, alongside addressing cyber resilience. While the event showcased promising innovations, challenges in data transformation for AI were also acknowledged.
Key Takeaways
- NetApp introduced the AI Data Engine (AIDE) and NetApp AFX to streamline AI pipelines.
- Enhanced Ransomware Resilience Service offers free trials to bolster cyber defenses.
- Cloud storage growth noted at nearly 50% year-over-year.
- Customer panel highlighted real-world applications by NFL and Aston Martin F1.
- Focus on expanding cloud capabilities and AI-driven threat detection.
Financial Results
- Cloud storage has been growing almost 50% year-over-year.
- Over 21,000 managed units in the open source Instaclustr business.
- More than 5,000 paying enterprise customers, with substantial growth.
Operational Updates
- AI Data Engine (AIDE) and NetApp AFX: New offerings to enhance AI pipeline performance and scalability.
- Enhanced Ransomware Resilience Service: Provides data breach detection and recovery options, with a free 6-month trial.
- Shift Toolkit: Enables rapid VM reformatting across hypervisors.
- Google Cloud Integration: Expanded capabilities with block storage and SnapMirror support.
- DGX SuperPOD Certification: Achieved for NetApp AFX, enhancing AI project support.
- Visual Studio Code Extension: Developer-friendly tools for AI integration.
- Post-Quantum Cryptography: Enhanced security in storage offerings.
- ARP AI Availability: Autonomous Ransomware Protection now available for various workloads.
Future Outlook
- Continued growth and adoption of AI Data Engine and NetApp AFX.
- Innovation in cyber resilience with AI-driven threat detection.
- Expansion of cloud capabilities and hyperscaler integrations.
- Support for emerging neoclouds or AI factories.
- Integration to support agentic AI with metadata fabric and knowledge graph.
Q&A Highlights
- AI Readiness: Challenges in making storage AI-ready for customers.
- Cybersecurity Budget: Focus on integrating AI security to reduce customer costs.
- AI Investment: Shift from compute to data value extraction.
- Google Cloud Announcement: New storage capabilities enhance workload flexibility.
- Competition: Emphasis on ONTAP platform for data management and cloud availability.
For a detailed understanding, readers are encouraged to refer to the full transcript below.
Full transcript - NetApp INSIGHT 2025 Investor Tech Session:
Chris, NetApp: Hey, everyone, and thank you for joining us for the investor session at NetApp Insight. Hopefully, you were able to attend the keynote this morning or watch it on the webcast. If not, you can catch it on replay. Lots of exciting announcements for you. Before we get started, I'm going to read a safe harbor. For that, each of the 2025 Insight Financial Analyst Tech sessions may contain forward-looking statements and projections about our strategies, products, including unreleased offerings, future results, performance, or achievements, financial and otherwise. These statements and projections reflect management's current expectations, estimates, and assumptions based on the information currently available to us and are not guarantees of future performance or products, services, or features. The development, release, and timing of any feature or functionality for NetApp products and services remain at the sole discretion of NetApp and are subject to change without notice.
Actual results may differ materially from our statements or projections for a variety of reasons, including macroeconomic and market conditions, global political conditions, and matters specific to the company's business, such as changes in customer demand for storage and data management solutions and acceptance of our products and services. These and other equally important factors that may affect our future results are described in reports and documents we file from time to time with the SEC, including factors described under the section titled Risk Factors in our most recent filings on Form 10-K and 10-Q available at www.sec.gov. These forward-looking statements made in the presentations are being made as of the time and date of the live presentations.
If the presentations are reviewed after the time and date of the live presentation, even if subsequently made available by us on our website or otherwise, these presentations may not contain current or accurate information. We disclaim any obligation to update or revise any forward-looking statement based on new information, future events, or otherwise. OK, with that out of the way, we have an exciting agenda for you today. You'll hear first from our CEO, George Kurian. He'll give you a recap of the general session. Shyam Nair will come and talk about some of the exciting AI innovations that we've made around the NetApp data platform, specifically NetApp AFX, which is the disaggregated ONTAP for exabyte-scale, and the AI Data Engine, a foundation for Gen and Agentic AI. Gagan Gulati will come and talk about cyber resilience. Today we announced Enhanced Ransomware Resilience Service.
You'll hear from Sandeep Singh around data infrastructure and modernization, how we're helping customers streamline costs and operations. One of the announcements we made today is the Shift Toolkit, which provides near-instantaneous conversion of VM formats across hypervisors, important in this day when everyone's trying to move off their current hypervisor. Pravjit Tuana will come and talk about cloud transformation. Today we announced a number of features that will expand our opportunity in the cloud, including block for Google Cloud NetApp Volumes and support for SnapMirror and FlexCache across all hyperscalers. Finally, we've got a really cool customer panel with customers from the San Francisco 49ers and Levi's Stadium, the NFL, and the Aston Martin F1 team. With that, I'm happy to introduce our CEO, George Kurian.
George Kurian, CEO, NetApp: Thank you, Chris. Welcome to all of you. Welcome to NetApp Insight. We have a super exciting agenda over the course of the next few days, talking about how we are continuing the pursuit of our mission, which is to help our clients unlock the power of their data using the widest range of applications possible. You know, we started that journey with network file storage, where workgroups wanted to share data. We brought that to enterprise scale with the unified data storage platforms that we introduced many years ago. We brought it to hyperscale scale with our hybrid cloud solutions, our hybrid cloud data fabric.
What all of this was really driven by is the idea that the best return on investment on your data and the infrastructure that holds that data is that you can seamlessly connect it to all of the sources of innovation and services in the world. The latest group of those services is really large language models and multimodal models, which is the AI landscape. AI itself relies on good quality data. Right? I think you all know that. The biggest challenge with using AI effectively is how do you actually manage the data, organize it, curate it, and feed it into these AI models in a transformed manner. Right? This idea of going from your enterprise data, which is created out of the applications that run your business, to AI-based, AI-ready data is called a pipeline. A pipeline is essentially a series of steps that we talked about.
The challenge with pipelines has been that the classic way of building pipelines was created for structured data. This is data that comes out of databases or data warehouses, where the schema of the data is already well defined, meaning structured data typically is in some table format. You've got a description of the data in the structure of the table. You've got access controls and governance rules built into the way the table operates. Right? For unstructured data, you don't have a schema. You have to generate a schema. You have to generate that schema using technologies like LLMs. The second is the volume of unstructured data. The volume and change rate of unstructured data are an order of magnitude larger than for structured data. A large database, for example, is a few terabytes. A single media file can be 100 terabytes.
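For readers who want to make the schema-generation step concrete, here is a minimal sketch of the idea. The describe_with_llm helper is a hypothetical stand-in for a real LLM call (it returns a naive placeholder so the sketch runs); none of this is a NetApp API.

```python
# Sketch: derive a catalog of per-file schemas for unstructured data.
# describe_with_llm() is a placeholder; a real pipeline would send the
# sample to an LLM and parse structured JSON back.
import json
from pathlib import Path

def describe_with_llm(sample: str) -> dict:
    # Placeholder result; swap in a real model call here.
    return {"fields": ["text"], "language": "unknown", "chars_sampled": len(sample)}

def generate_catalog(root: str) -> dict:
    root_path = Path(root)
    if not root_path.is_dir():
        return {}
    catalog = {}
    for path in root_path.rglob("*"):
        if path.is_file():
            sample = path.read_text(errors="ignore")[:4000]  # sample, don't copy the file
            catalog[str(path)] = describe_with_llm(sample)
    return catalog

if __name__ == "__main__":
    print(json.dumps(generate_catalog("./corpus"), indent=2))
```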
There's literally no comparison between structured and unstructured data on volume. One of the challenges that clients have is that if they use the classic approach, they have to copy all this data into an application for annotation, then into another application for unification, and then another application for transformation. It's insanely expensive and complex. It is extraordinarily hard, if not impossible, to carry forward data security, access controls, lineage, all of the things that you want, which makes traceability very, very hard to do. Let's say you run a model and you have drift in the model's results. You have no way to figure out what changed in the source data. We envision what we call a data platform that is built to accommodate all the types of data in the world. Right? We see three things in there.
The first is the idea that you will have multiple data formats on which your applications want to operate. You will have enterprise data formats. These are your classic file, block, and object, where traditional enterprise applications want to access data. The second is Gen AI and agentic applications that want to use what's called a vectorized embedding or a tokenized data format. The third is you want to have metadata operations on what's called a canonical data format. This could be an Iceberg table, or it could be a JSON representation of file data. The data platform needs to support all of them. What we have done, which is unique in the industry, is we are saying, hey, you can keep your source data in one place with one kind of copy. That's the original source data. You don't have to create multiple copies of it.
You can present it in different ways to the different applications. You can present it in a canonical way, for example, to an analytics application that wants to use Spark. You can present it in a vectorized manner for LLMs. You can present it in the classic file, block, and object format to a traditional application. A lot of our original IP in ONTAP allows us to do this. What this enables you to do is to massively simplify your pipeline. You don't have six copies; you have one copy, which is the original data. You can maintain security, access controls, lineage. Everything is able to be built in as you transform the data. Importantly, you can keep the source data and all its transforms in the same volume, so that if you delete the source data, you automatically delete all its transforms.
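As an illustration of the one-copy, many-presentations idea, here is a hedged sketch: the same bytes served as a raw file read, a canonical JSON metadata record, and a vector. The toy embedding is a placeholder for a real embedding model; this is an analogy, not how ONTAP implements it.

```python
# Sketch: one source object exposed three ways, without duplicating it.
import hashlib
import json

class SourceViews:
    def __init__(self, path: str):
        self.path = path
        with open(path, "rb") as f:
            self.raw = f.read()  # classic file/object access: the bytes themselves

    def canonical(self) -> str:
        # Canonical/metadata representation, e.g. one JSON record per file.
        return json.dumps({
            "path": self.path,
            "bytes": len(self.raw),
            "sha256": hashlib.sha256(self.raw).hexdigest(),
        })

    def vectorized(self, dim: int = 8) -> list:
        # Placeholder embedding derived from content; a real system would
        # call an embedding model here.
        digest = hashlib.sha256(self.raw).digest()
        return [b / 255 for b in digest[:dim]]
```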
If the source data changes, we have a data change detection engine in ONTAP that allows you to update the catalog of data and say, hey, these models need to be rerun because the data changed. There is lots of intellectual property that we have built for a long time that helps us do that super, super efficiently, way more efficiently than any other solution in the market. Today, you've got a lot of dumb storage systems with a stupid parallel file system on top of it. They go fast, but they can't do any of these transforms. They will say, oh, for transform, you've got to feed it up into another pipeline. Now think about how much time it takes to extract, let's say, 20 terabytes of data from a storage system, copy it up into an annotation system, and rewrite it back into the parallel file system.
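A minimal sketch of that change-detection idea, assuming a simple checksum catalog; the catalog layout and the source-to-model mapping are illustrative, not ONTAP's internal format.

```python
# Sketch: compare current checksums against a stored catalog and report
# which downstream models need a rerun because their source data changed.
import hashlib
import json
from pathlib import Path

def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def stale_models(catalog_path: str, model_deps: dict) -> set:
    """model_deps maps a source file to the models trained on it."""
    catalog = json.loads(Path(catalog_path).read_text())  # {source: old checksum}
    stale = set()
    for source, models in model_deps.items():
        if checksum(Path(source)) != catalog.get(source):
            stale.update(models)  # data changed: flag these models for a rerun
    return stale
```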
That batch extract-and-copy workflow is the most braindead idea I've ever seen. You know what? We're saying, hey, you want performance? We'll give you performance. I'll talk about how you do that. You need the data management, because without the data management, you are basically doing batch data copies everywhere. The second element of what we announced is what we think is a new class of data infrastructure. You've seen newer technologies that are essentially memory-speed fabrics, where you've got memory-speed connectivity across network fabrics. What this allows you to do is build systems that are highly flexible. You can combine processing, memory, persistence, or storage in flexible ways.
What we've done is we've architected a disaggregated system that combines data access nodes and data retrieval nodes, which are classic storage constructs, with data processing and transformation nodes, which are GPUs or CPUs, within the same trust boundary of data access. How does that work? It allows you to do the activities that you need on the data. For example, on unstructured data, we said you need to actually enrich the data so that you can get the metadata; it isn't created out of the gate. You can actually now run AI models against it. You can run them on the GPUs. To the data resident on the storage, it feels like a trusted user is accessing it. All of your security and access controls, your protection, your guardrails, all of that is carried forward. Those are really the two big things.
We feel like, hey, a lot of the data platforms in the world, like data lakes and warehouses, were built for structured data. They're not going to scale for unstructured data. To really manage unstructured data and, in fact, all forms of data, you need to embed the intelligence right where the data is created. We've done that for security. We did that for storage efficiency. Now we're doing it for data transformation and enrichment. To do that effectively, we've created a composable system that allows you to mix and match data access and data transformation nodes in one unified system architecture, as well as a suite of software services that combines our tech with tech from NVIDIA that allow you to process the data and transform it in place without copies. We're super excited.
I was out at the Expo show right after the conference, and I can just tell you how gratifying it is to have two or three clients walk up to me and say, hey, you were listening to my problem. You got the perfect solution. I'm super excited. One was a pharmaceutical company. The other was a manufacturer from Germany that we have worked with for many years. They said, you know what's especially unique about NetApp is you guys keep making ONTAP better and better so that the investment we've made in your tech, now you're making it available in so many new ways. Thank you for coming. Have an awesome conference.
Chris, NetApp: All right. Thank you, George. I appreciate that. Now I am happy to introduce to you our new Chief Product Officer, Shyam Nair. He'll tell you a little bit more about what we're doing in AI and give you guys an opportunity to ask questions.
Shyam Nair, Chief Product Officer, NetApp: First off, thank you. Thanks for coming. Hopefully, you had a good start today with the conference. Look, first, I'm new. I'm super excited, super excited, because of a few things. One, this is the time when customers are navigating two secular trends. Cloud still continues to be a journey for most enterprises. And AI transformation: everybody talks AI. You hear about it everywhere. Most of the investments today are on the compute side of it. The real value for AI comes from data. Data is growing significantly, and not like the Hadoop days. Now we are talking about LLM models, machine-generated data, growing unstructured data. This is an explosion where a lot of value sits in the data. And it's really, really hard to get value out of the data.
Anybody who has worked across the data industry knows that the overall processing of data is where most of the complexity, cost, and time is spent. Right? We can change the game by bringing the intelligence that George Kurian talked about into the platform. Intelligence in the platform means you can think about the future like this: every unstructured data set is like a database. If you know about Iceberg, imagine everything is an Iceberg table. You can just run queries, search, semantics, data models, vertical models on top of it. Look, it's not just a vision. It's something that we are executing toward now with our AI Data Engine (AIDE). We have actually built a metadata engine on the NetApp ONTAP platform. We have vectorization on the NetApp ONTAP platform.
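To ground the 'query your files like a database' idea, here is a tiny, hedged sketch of semantic search over per-file embeddings; how the vectors get produced is out of scope here, and any embedding model would do.

```python
# Sketch: in-memory similarity search over per-file embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(index: dict, query_vec, k: int = 5):
    """index maps file path -> embedding; returns the k closest files."""
    ranked = sorted(index.items(), key=lambda kv: cosine(kv[1], query_vec), reverse=True)
    return ranked[:k]
```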
We have built-in guardrails for AI security, because a huge challenge for most of the people who are trying to get anything out of AI is: how do you secure that data? How do you protect the data? These things are built into the platform. I think it's a unique opportunity for us, a unique opportunity to serve our customers who have these data sets across industries, whether it is media and entertainment, pharmaceuticals, or manufacturing. Across industries, this is the same problem that we get a chance to solve. Most of our announcements today were about moving along that journey. One of the other key differentiators is the flexibility we give customers. I'm really proud to say that nobody else can do this, because we have the same platform running across all of the hyperscalers and on-premises.
Data grows in hyperscalers, in AI factories, and on-premises, and it is not going to be moved from one place to the other. Keep the data where it is: with technologies like FlexCache and SnapMirror that we have built into the ONTAP platform, data for AI can move to the edge without actually copying the data and creating data sprawl. I don't know if you talk to customers, but I've talked to several customers who have said, no exaggeration, that sometimes 60% to 70% of their data is copies, and they don't even know where those copies are, because every department's data gets moved around. Some of it is moved to the lakehouse for harmonization, modeling, et cetera. We can cut all of this out and really bring AI to the data. That's the power of the AIDE that we are building. I've been part of data technology for some time.
After my operating system career, I was mostly with databases, NoSQL, big data. Many of you may remember when Hadoop was a big thing; it was going to change the world. It did. It actually created a new set of applications. Cloud changed the world. It did. It created a new set of applications, a new way of managing data. But some of the challenges in getting value out of unstructured data continued. Now is the moment, because there is compute. Now is the moment, because there are storage capabilities that are intelligent, where you can directly get intelligence and analytics out of them. That's the excitement I have about what we are embarking on. AIDE, and AFX, a true disaggregated storage. This is where we are building disaggregation on top of all the data management capabilities.
You disaggregate compute and storage, but you need all the data management capabilities, because that's the most precious thing for most customers. It's the data. It's the semantics of what is in the data. Each file is just a file without the metadata. Once you look into the metadata, it has lots of precious information. How do you protect your IP? How do you protect the attributes that are in the data? How do you make sure that's your key value that you can continue to protect? These are all things that the data management and AI security capabilities built into the platform provide. AFX plus AIDE, plus the cyber resilience capabilities that we have had. Right? This matters to everybody the world over. I came from a cybersecurity company before this. Most of the threats in cybersecurity now come from AI. AI-driven threats are growing. I don't know if many of you know this.
The average time for a threat to break out is two minutes. Many people say, oh, once it happens, I can figure it out. No, the harm is done. You need protection before it happens, which means that protection needs to sit where your precious asset is, which is data. I'm super excited that we have NetApp AFX from a disaggregated storage standpoint. We have the AI Data Engine (AIDE), which is the data engine built in, the intelligence built in, cybersecurity built in. I think it's a huge growth opportunity for us to build and grow that capability. Then cloud across hyperscalers. Again, nobody else has this, where all of the hyperscalers have all the functionality.
What I was showing at the keynote, I don't know if you got a chance to watch it: once I copy a file, it can show up in every place where I can read and write, but I'm not really copying it across. I can use FlexCache to bring data where the compute is. That's a phenomenal thing. It was actually built into NetApp ONTAP years ago; it wasn't built for AI. Now is the time when everybody can leverage it, because customers are adopting cloud, and that's where most of the AI adoption is going. Customer data sits on premises because there is a lot of data accumulated over the years. This is their IP. The opportunity for us is big. I look forward to continuing to innovate in that space and creating new business opportunities for us to grow.
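For intuition only, here is a toy read-through cache in the spirit of what FlexCache does at storage scale: the first access pulls a block from the origin on demand, later reads are served locally, and nothing is bulk-copied up front. It is an analogy, not NetApp's implementation.

```python
# Sketch: read-through caching, fetching from the origin only on a miss.
class ReadThroughCache:
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch  # callable: block_id -> bytes
        self.local = {}

    def read(self, block_id: str) -> bytes:
        if block_id not in self.local:              # cache miss
            self.local[block_id] = self.origin_fetch(block_id)
        return self.local[block_id]                 # later reads are local
```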
I've never been this excited about what we can actually do to delight our customers, continue to keep the trust of our customers, and make sure that customers are successful. Super excited to be here. Thank you. I'm happy to take questions.
Chris, NetApp: All right. Let's get some questions going. Ananda in the back.
Ananda Baru, Loop Capital: Hey, thanks. Thanks a lot for doing this. Yeah, and great keynote this morning as well. Ananda Baru, Loop Capital. Could you describe to us how we should expect the manifestation of this to begin to show up in the business? Where is that journey today? Is what you and George are describing really a share gain story? Or is it a market share story, maybe more appropriately, as this whole dynamic gets going? Thanks.
Shyam Nair, Chief Product Officer, NetApp: Yeah. I think it is on a few fronts. One, I would say, is AI-ready storage; it's still a challenge for most customers. Everybody talks about the new stacks, but customers already have infrastructure that they are leveraging. How do you make that storage AI-ready? That will be a share gain story across block, file, and object. We can provide that one platform, reducing the complexity of managing multiple systems. That is going to be a share gain story for us. The second one, which is share gain as well as additional value, is going to be cyber protection. I think most of the money is being spent on cyber protection outside of infrastructure. AI gets all the hype, but cyber matters, all the way from zero trust to data security.
Data security is one of the biggest problems facing the industry today, especially when it comes to AI. When you talk to customers, you find that most customers tend to either open up or close down. There's no middle ground. Most of the AI security value they get today is visibility into what is being used. Many of your enterprises would have the same challenge. We can bring AI security directly to the storage, directly to the data platform. That should give us an uplift, both in storage and in the value add that we are providing customers, because customers can now take away additional tools and expenditures and rely on the platform. That's the second part of it. Third is going to be the AI Data Engine.
The AI Data Engine is going to reduce the complexity of what customers have to do to make their data AI-ready. George talked about lakehouses and, if you are familiar with it, the bronze, silver, gold model of working with data: you harmonize it, model it, then create entities and find value on top of it. There's a lot of work customers are spending there that they will see value in not having to do. I'm hoping that will bring us more additional value add in terms of what we're delivering. I would say storage is one, but cyber and the AI Data Engine would actually drive more.
Ananda Baru, Loop Capital: Thanks, sir.
Chris, NetApp: Ananda's got a follow-up. We'll get to Lou.
Ananda Baru, Loop Capital: Thanks. That's great. Just a quick clarification. For cyber, do you think you capture cyber budget from other folks? Is that how you see it? In your experience, where do you think customers are right now with their proofs of concept and their journey to inference? That's it. Thanks.
Shyam Nair, Chief Product Officer, NetApp: I think two, both on the cyber and the AI front. I don't envision us being a cybersecurity company. It's more about bringing that value so that our software, our hardware, our systems are more valuable. It becomes a premium for what customers have to spend on us, reducing their budget somewhere else. I'm not looking to be a pure cyber play company because it's not the core competency that we are in.
Lou Mitsocha, Daiwa Capital Markets: Let me build a little bit off of what Ananda just said. This is Lou Mitsocha at Daiwa Capital Markets. On the last earnings call, George talked about 125 AI infrastructure wins. I'm just trying to understand: when customers have their proofs of concept and they're trying to do something, what's holding things back? We do hear about a lot of applications, and there's an awful lot of money, obviously, being spent on this infrastructure. I'd really like to know whether enterprises are really starting to move forward in size, which would justify all the investments being made, or whether the justification for the investments really only holds with the big cloud and mega tech companies.
Shyam Nair, Chief Product Officer, NetApp: I'm not sure anything is holding things back per se. I think most of the spend has been on the compute side, the experimentation side. Customers are seeking value from their data. I think it's more an opportunity for us that we'll continue to see. We are growing really well in object storage, because object storage for lakehouses is a key ingredient for AI. With NetApp AFX and the AI Data Engine (AIDE), there's a keen interest from customers wanting the disaggregated storage and being able to build AI engines on top of it. Just as a proof point, at our own AI session, the room was overflowing. People were saying, look, this is what I've been looking for. People have a lot of interest.
Customers have a lot of interest to see driving AI value directly out of it. I think it'll grow for us significantly over the next few years.
Lou Mitsocha, Daiwa Capital Markets: OK, thank you.
Kat Campagna, Goldman Sachs: Hi. Kat Campagna from Goldman Sachs. Shifting gears a little bit, I wanted to ask a question about today's announcement with Google Cloud and the new block storage capabilities that you talked about. Why was this important for your customers? Where does this really help? How does this change the outlook that you have for public cloud growth into next year?
Chris, NetApp: We do have a cloud expert coming later today.
Shyam Nair, Chief Product Officer, NetApp: Pravjit will talk about it. A short summary is: look, customers, as they're migrating, are leveraging cloud more today than before. Customers are seeing that flexibility and scalability extend to the cloud. Every customer has a cloud choice or multiple cloud choices. A unified platform where customers see the simplicity of being able to leverage both NAS and SAN is important for them when they think about workloads. Virtualization is a good example of a workload they're moving to the cloud, where having block capabilities helps us. And for new AI projects, one of the things stopping AI projects is their capital-intensive nature. Many customers are running new AI projects in the cloud, and having block capabilities helps there too.
Chris, NetApp: All right. We have a question from Wamsi.
Wamsi Mohan, Bank of America: Thank you, Shyam, for doing this. Wamsi Mohan, Bank of America. I was wondering if you could talk about your announcement around the new DGX SuperPOD qualification. About six months ago, you had the AFF A90 qualified for that. How are customers looking at this versus the prior one? Is there any difference in the software offering? Is it purely the scalability across compute and storage that's different? How are you positioning it for the market? Thank you.
Shyam Nair, Chief Product Officer, NetApp: Yeah. It is more about capability. There are still workloads that don't need a disaggregated architecture, and there are also workloads that are very focused on disaggregated architecture, especially where you have to checkpoint when you're doing model training, et cetera. The NetApp AFX and the DGX SuperPOD certification help us get into a market where we weren't present much, because NetApp AFF could help mostly on the inference side. It's an expansion. The other thing is, it's also a proof point that, as compute, storage, and data platforms get disaggregated, we can play a full-fidelity role in the NVIDIA ecosystem.
We are working with what I would call neoclouds or AI factories that are coming up, trying to be the partner there, because we now have the assets to go with NVIDIA and win those deals.
Chris, NetApp: All right, we have a question from the webcast.
Chris, NetApp: This question is on behalf of Aaron Rakers at Wells Fargo. Given that AFX is a new and incremental part of the NetApp portfolio, how should we think about sizing the TAM opportunity that the new AFX systems and platform address? There's a follow-up too.
Shyam Nair, Chief Product Officer, NetApp: I don't know if I have an answer for that, Sandeep.
Chris, NetApp: I think we'll see if someone later can answer the TAM question. If not, I'll do some research and get back to you, Aaron.
Chris, NetApp: In terms of NetApp AFX, who do you see as the key competitive platforms? Is it Dell Project Lightning, VAST, Weka, Pure? Could you expand on that?
Shyam Nair, Chief Product Officer, NetApp: I think from my perspective, it is an opportunity for us to gain. I'm not looking at this from a pure competitive standpoint. There are competitive products out there, but our product, because we are building on top of the ONTAP platform, all the data management capabilities that we have, as well as the fact that whatever we're building as a platform is available in every cloud, that differentiates us. To me, it's not a competitive play. We will be the best platform for customers to solve this. That will actually help us grow.
Chris, NetApp: All right.
Unidentified analyst: Yes, maybe that one is for Sandeep Singh. Two follow-ups. If I step back and look at the announcements today and all the comments you made, would it be fair to say that the vast AI opportunity for NetApp is still focused on enterprises? I understand most of the investment so far has been on compute, and many of these investments are also enabling AI-native data and workloads. I didn't hear anything that would give me confidence that you have actually expanded your exposure to hyperscalers. Perhaps it is the enterprise, given your ONTAP install base and the additional products you introduced today, that would actually help you with the incremental opportunity on the AI side. Would that be a fair way of summarizing this? I have a follow-up.
Shyam Nair, Chief Product Officer, NetApp: I would say both. I think it is fair that we are strong and we will expand. When Pravjit comes, he can talk about the number of new logos that we actually have on the cloud. These are not NetApp customers. We are bringing in new customers on first-party NetApp offering in the cloud, and many of those workloads are for high-performance and AI-related workloads. I think there's a growing trend of using AI within the cloud that will also drive, given some of the innovations that we have built.
Unidentified analyst: OK. I heard you talking about data management. Is this a new focus area, especially with enterprise opportunities related to AI? Are you trying to expand your install base of storage and add another layer of value-add services?
Shyam Nair, Chief Product Officer, NetApp: Yes. I talk data management in two contexts, just to be. There is the storage-based data management capabilities that were built into ONTAP. That's part of the ONTAP. There are also functionalities in terms of cyber protection, which is a good example of it, where data has policies and those policies travel with data. Now, when you think about AI workloads, that's going to be something that is going to actually accelerate adoption of the AI workloads. When I talk data management, I'm talking about the core data management capabilities. This data engine and some of the other capabilities that we are running, those are net new, enabling newer workloads, new scenarios for customers. There is both sides of it.
Unidentified analyst: Does that mean that you will go back to providing a mix of hardware and software within your product revenue?
Chris, NetApp: Essential update.
Shyam Nair, Chief Product Officer, NetApp: Hi, I don't know.
Chris, NetApp: Yeah. Not a question for him. All right, Tim.
Tim Long, Barclays: Thank you. Tim Long at Barclays. I just wanted to get back to, and I think you've touched on these a little bit, NetApp AFX and the AI Data Engine, the newer solutions. Can you talk a little bit about the sales force, channel, and customer education needed to realize the benefits here? As a result, does that mean a somewhat longer path to revenues or to deployment than some of the existing technologies? I'm also curious, for one or both of them, do you think there's a little bit more of a software, maintenance, and services bent? Or is this more similar to some of the other product innovations? Thank you.
Shyam Nair, Chief Product Officer, NetApp: Is Cesar Cernuda or Dallas coming on later?
Chris, NetApp: Yeah.
Shyam Nair, Chief Product Officer, NetApp: I'll touch on it. I may not be able to answer the whole question; Chris, you may be able to follow up. For us, both cloud and hybrid cloud, what we are building as a product capability is going to serve both customer bases. That becomes added value for us. We are working with Azure NetApp Files, Google Cloud NetApp Volumes on the Google side, as well as AWS, to make sure that all of the platform capabilities run there. As an example, the AI Data Engine (AIDE) that we have delivered is built on top of NetApp ONTAP. It's available as part of a system that we are selling, NetApp AFX, with NVIDIA nodes. The same software can open up the entire ONTAP estate, including in the cloud, for customers. That's going to be additional value add.
The exact go-to-market motion and business model, I don't have an answer for. Chris, you can follow up.
Chris, NetApp: Yeah, we can follow up. You probably should expect to hear more on the earnings call about how we're taking this to market. All right. Sameek.
Samik Chatterjee, JPMorgan: Hi. Samik Chatterjee, JPMorgan. Maybe I can go back to NetApp AFX. You described it in your keynote as meant for exabyte-scale storage. More directly, does it really position you differently with some of the neoclouds that have been looking at these opportunities? Does the scalability here position you differently? A follow-up on the cyber resilience or cybersecurity side as well. Mentally, I'm thinking you're just going to go up against companies like Rubrik and Cohesity. Why would a customer then think about allocating some of the budget that was addressed to those companies over to paying more of a premium for your service?
Shyam Nair, Chief Product Officer, NetApp: OK. I'll cover the product side of it, not the financial aspect. On NetApp AFX: yes, it does position us well with some of these. Think about the AI factories and giga factories that are forming across the world, like sovereign clouds. It positions us much better to have a good footprint, because now we are providing, I think George used the term dumb storage, actually smart storage that can help customers drive value out of it. It does position us well. We are working with some of the partners on how we become part of this ecosystem. It's in the early stages, but technologically, it positions us well. Cyber, I think, is a two-pronged thing.
Look, we want to be the best at cyber, not just from a resilience and data protection standpoint, but from an AI security standpoint built into the platform, which should make our platform much easier to use, reducing the complexity for customers, much more secure, gaining trust. That itself is an added advantage for us as a platform. At the same time, we want to work with the ecosystem vendors out there so they can leverage our APIs. We want an open ecosystem with the partners. From a product positioning standpoint, we're not looking at this as an alternative to them. How exactly the dollars move around, I think we'll have to figure out. Technologically, it'll make us much more advanced and ready to be the platform of choice for most of these workloads in the future.
Chris, NetApp: All right, Steve.
Steve Fox, Fox Advisors: Hi. Steve Fox with Fox Advisors. I guess I'm still a little bit confused about the competitive advantages that you're laying out. You've had the advantage with ONTAP. You've had multi-cloud and on-premises. What is different, or what came together, in these product announcements today that you're bringing together and that gives you more?
Shyam Nair, Chief Product Officer, NetApp: Yeah. Thanks for that question. I'll clarify. The disaggregation allows our customers to scale performance and capacity independently, which was a challenge. Many workloads, not just AI, need it. Media and entertainment is a really good example, where there are lots of bytes, and they want performance from a different standpoint. That opens us up for multiple new workloads where we have been playing, but now we can become a major player in that space. That's number one. On the AI Data Engine (AIDE) itself: look, today, customers struggle to get value out of their data because it's complex. The pipeline that George showed, if you were at the keynote, that's real. That's from a particular customer, and it applies to many customers. I've been in many of those shoes. It's really hard to move data. Multiple data copies and transformations happen to really make it meaningful.
AIDE takes that away, and it is now available across the platform everywhere. For customers, I think the differentiating factor is that whether the data is in the cloud or on-premises, all of it is immediately accessible for AI without having to do that complex processing, that complex set of pipelines. That's a huge opportunity, because it reduces costs for customers, we provide more value, it reduces complexity, and it helps them move their AI projects along much faster. Industry analysts have been talking about this; depending on who you listen to, 40% to 60%, or even 87%, of AI projects in enterprises are not succeeding. Most of them are not succeeding because the data isn't ready. The infrastructure is not connected for AI.
These two innovations that we showcased today and that we are delivering actually connect customer storage and their data, especially their unstructured data estate, to AI so that they can drive value. I think it's a huge innovation from that standpoint for us.
Chris, NetApp: All right. Frederick in the back.
Frederick Gooding, William Blair: How are you doing? Frederick Gooding with William Blair. There was a nice slide in the keynote earlier about the combination of data management services, the metadata engine, and unified data storage. I'm curious, do you see any of those specific segments driving more interest, more customer demand, or a higher competitive advantage? Or do you think it's more the fact that all of these are integrated within one platform, and that really sets you apart from anybody else?
Shyam Nair, Chief Product Officer, NetApp: I could have, with all of these innovations, gone and claimed we are the new database. I'm not trying to claim something that it is not. What I really, really want to make sure everybody, all our customers, understand is that the amount of work it takes to get value out of this is simplified now, because we are able to do it on the platform. It's additional compute. It's innovation. It's on the platform. I would expect it to drive both: make us the platform of choice, and drive more value toward our platform, because now you don't need to do all the other aspects of it. It is going to be a competitive advantage against the competitors we have.
It is also going to be a competitive advantage in terms of winning workloads because there are many other steps that can be eliminated for the customers. Does that answer the question?
Frederick Gooding, William Blair: Yes.
Chris, NetApp: All right, Ananda.
Ananda Baru, Loop Capital: Hey, thanks. Quick follow-up. On your comments about starting to have conversations with the neoclouds, is there any distinction to make within those conversations between the training opportunity and the inferencing opportunity?
Shyam Nair, Chief Product Officer, NetApp: Yes, there is. This is something we are also looking at from a product standpoint. Many of the major needs driving some of these data center and neocloud investments are training opportunities. In much of this training, the data can be eventually consistent. If you think about the old database world, there is atomic consistency and eventual consistency. How you checkpoint, if you go into the technical details, is a little bit more lax. The ONTAP platform is built for full consistency; it is about consistent data. We are also looking at, where needed, tuning it or providing an offering so that customers can leverage eventual consistency for those checkpoint needs.
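As a toy illustration of that consistency trade-off (a sketch, not NetApp's checkpointing design): training keeps running while the checkpoint drains in the background, and the file is only published atomically at the end. Pass a copy of the state so it stays stable while being written.

```python
# Sketch: background checkpointing with an atomic publish at the end.
import os
import pickle
import tempfile
import threading

def checkpoint_async(state: dict, path: str) -> threading.Thread:
    def _write():
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        with os.fdopen(fd, "wb") as f:
            pickle.dump(state, f)
        os.replace(tmp, path)  # readers see either the old or the new checkpoint
    t = threading.Thread(target=_write, daemon=True)
    t.start()
    return t  # the training loop keeps computing while the write drains

if __name__ == "__main__":
    t = checkpoint_async({"step": 1, "weights": [0.1, 0.2]}, "ckpt.pkl")
    t.join()  # a real training loop would keep working instead of joining
```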
Over time, as years go by, maybe quarters go by, my sense is, and I think this is what the industry expects, more and more of this will move toward inference. There's only so much you want to train models. These models are changing every other day, and there's a lot of money with the big players actually training models. It's going to be more inference. We want to be, and we are, ready for that world where customers want to run inference on the data that they have. It's a two-pronged opportunity, and I'm seeing more of it on the inference side than the training side.
Ananda Baru, Loop Capital: That's great. Thanks.
Chris, NetApp: All right. Any more questions from anyone?
Chris, NetApp: I have one question for you, because it's a question I get from these guys all the time, and I'm surprised it hasn't come up. As you mentioned earlier, we see a lot of AI investment on the compute side, and we all know that data is important to make AI valuable. Why haven't we seen a commensurate investment in storage so far, and what do you think would drive that?
Shyam Nair, Chief Product Officer, NetApp: I think it's just timing. Look, if you look at, I don't know which analyst said it.
Chris, NetApp: I was going to use that.
Shyam Nair, Chief Product Officer, NetApp: AI-defined storage and AI are at the peak of the hype cycle. Right? As more and more production workloads start executing and go live, we will see a lot more importance placed on data and storage. It is not going to be purely a capacity game. It's going to be about the balance between performance and capacity. I would expect to start seeing that. This is why the timing is right for us at NetApp in terms of what we are delivering. Look, this is the right time to have NetApp AFX and the AI Data Engine. We, and Sandeep is here, my colleague, are going to go after customers, talk to customers, showcase this value, and get great wins. I'm super excited. It is going to happen.
Chris, NetApp: All right. That's a great note to end on. Thank you very much for your time.
Shyam Nair, Chief Product Officer, NetApp: Thank you. Thank you, everybody.
Chris, NetApp: All right. Thanks, everyone, for your questions. Our next presenter is new to you. He hasn't presented to the financial community before, but he's been with NetApp for several years now. Three? Several. That's right. Gagan Gulati heads up our value services, and he's here to talk and focus primarily on cyber resilience. With that, I'll introduce him and let him say a few remarks. Then you guys can open up with questions.
Gagan Gulati, SVP and GM, Data Services Group, NetApp: All right. You can hear me? All right. Perfect. My name is Gagan Gulati. I'm the SVP and GM for our Data Services group.
Unidentified speaker: I'm Nasser Khouri.
Gagan Gulati, SVP and GM, Data Services Group, NetApp: As Chris mentioned, today I'm going to talk mostly about... oh, there is a mic outside. OK. Today I'm going to talk mostly about cyber resilience and what we're doing there. I want to talk about a few things that we are introducing and working on. First is secure by design. NetApp, we believe, is the most secure storage on the planet, and we have been working extensively to keep it that way. In the secure-by-design category, we announced a bunch of new capabilities around post-quantum cryptography. That's pretty cool. Our customers are loving it already. The second big thing: over the last couple of years, we have worked extensively on a capability built into ONTAP called ARP, or Autonomous Ransomware Protection, with AI models built in. It provides our customers with real-time, built-in anomaly detection.
It can detect ransomware attacks with 99%+ accuracy, take snapshots as needed, and then alert the security operations team. It's the fastest-growing feature consumed by our customers today. What we delivered recently and announced is that this ARP AI capability is now available across the whole NetApp data estate, whether for file workloads or block workloads, including cloud. We have also announced this capability working with AWS for Amazon FSx for NetApp ONTAP. We're bringing it everywhere. The ARP AI portfolio continues to grow, and it is helping our customers secure their data estate against the biggest problem they have today, which is cyber attacks. That's number two.
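For intuition, here is a deliberately simplified sketch of one signal such a detector can key on: ransomware-encrypted writes look statistically random, so their byte entropy spikes toward 8 bits per byte. The real ARP AI uses trained models, not this toy heuristic, and the snapshot and alert callbacks below are placeholders.

```python
# Sketch: entropy-based anomaly check on incoming writes.
import math
from collections import Counter

def entropy(data: bytes) -> float:
    n = len(data)
    if n == 0:
        return 0.0
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def inspect_write(payload: bytes, take_snapshot, alert_soc, threshold: float = 7.5):
    e = entropy(payload)
    if e > threshold:
        take_snapshot()                                 # preserve a recovery point
        alert_soc(f"anomalous write, entropy={e:.2f}")  # notify security operations
```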
Third, ransomware resilience. We are introducing a brand new value service on top of our ONTAP platform that we call the Ransomware Resilience Service, and with it, two big capabilities today. The first is, again, an industry first: what we call data breach detection. Most of you know that when it comes to cyber attacks, attackers are moving toward what they call double extortion attacks. They will first make a copy of your data and exfiltrate it, then encrypt the data, and then charge you ransom for both: once for the decryption key and once for the data that was exported. They'll say, I will give it back. They may never give it back; they may use it and harvest it later.
What we are announcing today is the ability for our Ransomware Resilience Service to detect data breach attacks. We do it in real time, as the attack is happening, not after. As we detect these attacks, we generate alerts. We work with our own UEBA, or user and entity behavior analytics, tools. We work with our partner companies who do network firewalls. And we integrate with the likes of Cisco Splunk, with whom we are announcing a pretty big capability, working with the Splunk SIEM and also their SOAR, or security orchestration, automation, and response, capabilities. There is bi-directional work. That is an amazing set of capabilities that we are announcing today with Ransomware Resilience. That's number one.
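As a hedged sketch of what delivering a context-rich alert into a SIEM can look like, here is a minimal example against Splunk's standard HTTP Event Collector endpoint. The host, token, and event fields are placeholders, not NetApp's actual alert schema.

```python
# Sketch: push a rich alert to a Splunk HTTP Event Collector (HEC).
import requests

def send_alert(hec_base: str, token: str, alert: dict) -> None:
    resp = requests.post(
        f"{hec_base}/services/collector/event",
        headers={"Authorization": f"Splunk {token}"},
        json={"sourcetype": "storage:anomaly", "event": alert},
        timeout=10,
    )
    resp.raise_for_status()

# Example with placeholder values:
# send_alert("https://splunk.example.com:8088", "HEC-TOKEN",
#            {"type": "suspected_exfiltration", "user": "svc-backup",
#             "volume": "vol17", "evidence": "bulk reads then external egress"})
```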
The second big capability we are announcing, which our customers have been asking us for, is what we call isolated recovery environments. As you know, when these attacks happen, there is no guarantee that the backup copy you are recovering from is safe. Studies show that 75% of customers who suffer a ransomware attack end up getting attacked again, and a third of them by the same attacker. Why? Because the copy you're recovering from is already malware-infected; the attackers have affected not only your primary copy but also your backup copy.
What we are delivering today is the ability to recover from these attacks with confidence, because we run all kinds of malware and virus detection, all of our strength in data science, on these backup copies and snapshots, to make sure that when we recover them in an isolated environment, these snapshots and backups are safe and not infected. That is the second big capability we are delivering as part of the Ransomware Resilience Service.
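A minimal sketch of that isolated-recovery flow, assuming some scan function can vet a snapshot in a sandbox; scan_clean and the restore step are placeholders, not NetApp's tooling.

```python
# Sketch: walk snapshots newest-first and restore from the first clean one.
from typing import Callable, Iterable, Optional

def pick_clean_snapshot(snapshots: Iterable[str],
                        scan_clean: Callable[[str], bool]) -> Optional[str]:
    for snap in sorted(snapshots, reverse=True):  # newest first by name/timestamp
        if scan_clean(snap):                      # True means no infection found
            return snap                           # safe recovery point
    return None                                   # nothing safe found; escalate
```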
Now, shifting focus from capabilities to where we are taking our premium value services: these capabilities are, of course, made available as part of our ONTAP One licenses, but they are also something our customers and partners can purchase directly from the hyperscaler marketplaces, like AWS, Azure, and Google. That is one of the areas we are very focused on, making these easy to buy. We are also announcing a free six-month trial of this new Ransomware Resilience Service for all of our customers, up to a particular limit, because we want all our customers to use it. We hope all our customers utilize the power of Ransomware Resilience across their NetApp data estate with this capability. With that, I'm going to stop and take questions from you. Thank you.
Frederick Gooding, William Blair: Hi. How are you doing? Frederick Gooding with William Blair. I'm curious, how should we think about the ongoing convergence between storage, DSPM, and backup and recovery? You just announced the isolated environments. Do you think it's more of, all right, we need to consolidate everything together: on top of the storage environment, we need the recovery, we need the backup, we need the data classification? Or is it more about building interoperability with all of those different types of capabilities?
Gagan Gulati, SVP and GM, Data Services Group, NetApp: Oh, it's a great question. Look, security is best done in depth. What I mean by that is that every customer of ours, when they have to protect themselves against these cyber attacks, starts from the top: network security, perimeter security. You have a bunch of firewall implementations to protect against network attacks. You also have to make sure that you have identity security, because, at the end of the day, it's compromised users and identities that get used everywhere. Number three is data security. At the end of the day, it's about those crown jewels. This data sits on storage. Storage, therefore, becomes the last line of defense when everything else goes wrong. It doesn't mean that you only implement protection at the storage level.
You can't have data protection and security done only at the storage level. You have to secure all the different layers of the stack, if you want to call it that. That's number one. Number two: it takes a village. To the example you gave, data classification is super important to get right, so you know what's sensitive and what's not. What's sensitive is what you want to prioritize and protect first. At NetApp, we offer a capability called NetApp Classification that allows our customers to classify data. At the same time, we work with various classification vendors, DSPM vendors, to make sure that they can efficiently run data classification and security, whether it's a DSPM or similar examples like DLP, on top of the data that sits on NetApp storage.
Of course, they can do it in the most generic way, over NFS protocols, just files. Or they can utilize the best of what NetApp offers in terms of our own data management capabilities and integrate directly with us. DSPM and DLP vendors are just one part of the picture. The second part of the picture is data protection vendors and the entire ecosystem, whether it's the likes of Rubrik, Commvault, Cohesity, and others. Same story there. We want to make sure that our customers get the best data protection, the best cyber resilience. We work very closely with data protection vendors and ISVs as well, so that the whole partnership utilizes the best of data management from NetApp and customers get the best defense against such attacks.
We want to make sure that we offer the best we can offer to our customers. At the same time, we have an entire partner ecosystem that together we help our customers secure their data and keep them resilient.
Samik Chatterjee, JPMorgan: Hi. Samik Chatterjee, JPMorgan. Maybe just going back to, and you referenced this a bit: firstly, what's the current solution that enterprises are using? When you think about competitors in this field, who are you looking to really displace on that front? And when you're charging for it as more of a premium service, how do you envision that going? You said you want to do it more in depth; the competition is probably going more for breadth rather than depth on that front. Would you evaluate, in the future, doing this on a different storage platform, a third-party vendor's storage platform? Obviously, that won't give you the depth that you're looking for. Is that something that gets you more into competing with the pure plays in that area?
Gagan Gulati, SVP and GM, Data Services Group, NetApp: That's a great question. Number one, our first and biggest job is to drive preference for NetApp storage. For us, it's about making sure that our customers who have their biggest crown jewels, their biggest workloads, running on NetApp storage are safe. We are, therefore, building both the platform to allow our entire ecosystem to integrate with us and, at the same time, vertical products, like you said, to give our customers the ability to do it in the best possible way. Our products, like the Enhanced Ransomware Resilience Service, are always going to use the latest and greatest of what we can offer from the ONTAP layer. And of course, we deliver new sets of capabilities that we want our partners to integrate with over a period of time as well.
At the end of the day, it's a dual job in that sense. That's part one. Second, about the breadth play: we are very focused on helping our customers where they want us to help them. We have certain products in our portfolio that go beyond just the data on NetApp storage. For example, our observability solution, DII, helps our customers observe and monitor their entire stack, because that makes sense over there, right? It's not focused only on NetApp storage. As for the roadmap, we depend on what our customers tell us and ask us to do, and we will continue to evaluate. Again, our first rule of the game is to make sure that we keep our customers' data on NetApp safe and secure.
Tim Long, Barclays: Hi. Tim Long at Barclays. Two as well, if I could. First, I'm just curious if you could talk a little bit about the solution you gave as the example, working with Cisco Splunk and firewalls. What's the NetApp IP in this solution, and what's the partner IP? Then, shifting to the hyperscaler storefront model, and you may have answered part of this: is this the first major add-on of security to that offering? Was there an ask for this? And similar to other cloud offerings, was there a level of co-development with the hyperscaler partners?
Gagan Gulati, SVP and GM, Data Services Group, NetApp: OK. Sounds good. Let's start with the first part, the partnership we have with Cisco and Splunk and what we are doing there. At the end of the day, when you look at security tooling like what Cisco and Splunk offer, it's about ensuring that infrastructure players like us deliver the right set of alerts into the SIEM and SOAR solutions so that the security operator on the other side is able to take action. Right? The IP here is detecting these anomalies, that there is an attack going on, in real time. It is built into ONTAP, along with making sure that we deliver extremely high-quality alerts into the SIEM. That's the first part of our IP. Just to cover that conversation fully, ARP AI, the capability we're talking about, has been independently tested by multiple security labs across the globe.
SE Labs, based out of London, named us the winner this year in the enterprise data protection category and gave us a triple-A rating in its first round of testing. It's absolutely amazing; this capability is top-notch. That's part of our IP. Of course, when we deliver these alerts into Cisco, Splunk, and similar solutions like Microsoft Sentinel, it's not just about the alert. It's about sending a lot more data along with the alert to make sure that the security operator on the other side is able to take action. Right? That's all part of that IP. The security operator can then come back and start working with the storage admins to see what needs to happen next.
As part of the latest integration we did with Cisco and Splunk, we actually went a step further with data breach detection: not only do we send the alert over, we also now allow the security operator to block the user from causing further damage. Right? This capability is now built into Cisco, Splunk. That's part two of the security innovation and IP we are delivering: identifying who the actual user is, who is exfiltrating data, and then working closely with the Splunk team so that the security operator can block that user right away and prevent further damage. That's an example of the IP we create. Of course, we work with the entire security ecosystem wherever we can to deliver the capabilities our customers demand of us.
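To make the alert-delivery mechanics concrete, here is a minimal sketch of posting an enriched storage-anomaly alert to Splunk's HTTP Event Collector. The endpoint path and authorization header are standard Splunk HEC; the host, token, and event fields are placeholders, and this illustrates the pattern rather than NetApp's actual integration code.

```python
# Illustrative only: one way an enriched ransomware alert could be delivered
# to Splunk via its HTTP Event Collector (HEC). Host, token, and field names
# are placeholders, not NetApp's actual integration.
import json
import urllib.request

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder host
SPLUNK_HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def send_enriched_alert(volume: str, user: str, anomaly_score: float, files_touched: int) -> None:
    """Send an alert that carries context, not just a flag, so the SOC can act."""
    event = {
        "event": {
            "alert_type": "ransomware_activity_suspected",
            "volume": volume,
            "suspect_user": user,          # lets the SOC block the account immediately
            "anomaly_score": anomaly_score,
            "files_touched": files_touched,
        },
        "sourcetype": "storage:anomaly",
    }
    req = urllib.request.Request(
        SPLUNK_HEC_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={
            "Authorization": f"Splunk {SPLUNK_HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:  # raises HTTPError on non-2xx
        resp.read()

send_enriched_alert("vol_finance", "jdoe", 0.97, files_touched=1842)
```

The point made above is visible in the payload: the alert carries the suspect user and context, so the operator on the SIEM side can block the account rather than merely receive a flag.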
Coming to working with the hyperscalers, absolutely. All of these services, like the Enhanced Ransomware Resilience Service or the NetApp Backup and Recovery Service, which is well used by our customers, are SaaS-based services. The control plane is hosted in the hyperscalers, and they're available through the marketplaces. We work very closely with our hyperscaler friends to make sure that these services run optimally. At the same time, when our customers purchase these services through the marketplace, whether it goes through the paygo model, the private offer model, or whatever models the hyperscalers enable, we take part in that. We make it available to our customers with all the innovations that those guys are driving, a lot more from a quoting and pricing perspective, and ensure that customers can apply their existing hyperscaler commits to these services. That's a lot of the work we do with our hyperscaler marketplace teams. One last note: these services not only run against our storage in customers' data centers, but also against the storage that we have natively built in all three hyperscaler clouds and our CVO offering. We innovate with our hyperscaler partners there as well. Thank you.
George Kurian, CEO, NetApp: Wamsi.
Wamsi Mohan, Bank of America: Wamsi Mohan with Bank of America. Thanks for doing this, Gagan. Right up top, you mentioned something about designing for quantum.
Gagan Gulati, SVP and GM, Data Services Group, NetApp: Yeah.
Wamsi Mohan, Bank of America: I was just curious, you know, are you actually finding customers at this point worried about this? How real a worry is it at the moment? Is this sort of future-proofing? What exactly are you doing here to achieve that?
Gagan Gulati, SVP and GM, Data Services Group, NetApp: Oh, fantastic question. Look, the threat, or the risk that our customers want to mitigate today, is basically harvest now, decrypt later, right? It's as simple as that: if I can steal your data now, even if it's encrypted with today's algorithms, that's all right. Later, I'm going to come back and decrypt it when I have quantum computing, because then you can actually decrypt all of this data and charge a lot more, essentially. That's the threat. We hear about quantum computing coming up and improving. It's not viable yet, as you all know, but it's going to happen. What's happening right now, therefore, is that multiple different government agencies are starting to put in new standards for computing and cryptography.
For example, AES-256 is an encryption algorithm that we have now built in for data at rest, and we encourage our customers to start using it instead of previous encryption algorithms, which are not quantum safe, right? As more and more standards come into play, our job is to ensure that our operating system, ONTAP, is fully capable of encrypting customers' data with the algorithm of their choice, and to continue to grow on that journey. Our job is to make sure that ONTAP is always at the pinnacle in terms of the standards and the encryption algorithms that we make available to our customers. Our customers' job is to make sure that they utilize those encryption algorithms on their journey. To answer the other part of your question, are customers demanding this? Absolutely, yes, right?
That's why we have all of these innovations coming in: some of the biggest customers, financial institutions specifically, are the ones demanding that we continue to improve. We will always be, I believe, ahead of the game as the most secure storage.
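For context on why 256-bit symmetric encryption is the recommendation here: Grover's algorithm would roughly halve its effective strength, leaving about 128-bit security, which is why AES-256 is generally treated as quantum-resistant for data at rest. Below is a minimal sketch using the widely used Python cryptography package; it is purely illustrative and unrelated to ONTAP's implementation.

```python
# A minimal sketch of AES-256 authenticated encryption at rest, using the
# third-party "cryptography" package. Illustrates the algorithm choice
# discussed above; it is not NetApp's ONTAP implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # AES-256: ~128-bit security even vs. Grover
aesgcm = AESGCM(key)

plaintext = b"customer data block"
nonce = os.urandom(12)  # 96-bit nonce; never reuse with the same key
ciphertext = aesgcm.encrypt(nonce, plaintext, b"vol_id=vol_finance")  # third arg: associated data

# Decryption also authenticates; any tampering raises InvalidTag.
recovered = aesgcm.decrypt(nonce, ciphertext, b"vol_id=vol_finance")
assert recovered == plaintext
```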
George Kurian, CEO, NetApp: Thank you.
Chris, NetApp: Thank you.
George Kurian, CEO, NetApp: Frederick in the back.
Analyst: I'm curious how important securing metadata is, and also, as we look out over the next three to five years and AI becomes more integrated within the enterprise, Chris might tell me to shut up here, but where do you see the future gaps within the NetApp portfolio in terms of securing data, and what are you maybe looking at down the road?
George Kurian, CEO, NetApp: Definitely not the future gaps, but maybe some opportunities to continue to enhance our platform.
Gagan Gulati, SVP and GM, Data Services Group, NetApp: I believe that we've already talked about AI Data Engine.
George Kurian, CEO, NetApp: Yes.
Gagan Gulati, SVP and GM, Data Services Group, NetApp: OK.
George Kurian, CEO, NetApp: That got announced this morning.
Gagan Gulati, SVP and GM, Data Services Group, NetApp: Yes, we announced it earlier this morning, and we just ran a session before this about AI Data Engine. I mean, look, like you said, the AI journey is just starting. Enterprise customers are moving from AI pilots toward taking these projects to completion. We've all been in the industry long enough to know that, overall, AI-ready data doesn't exist yet. I think that's where most of our customers are. What they're trying to do right now is start cataloging data, which is basically putting the metadata together for all of their data so that they can make it easy for their data scientists and engineers to utilize it.
Of course, you can actually infer a lot from metadata about the actual data, whether it's the file name, the various attributes, who's changing it, what's been done with it, the tags, et cetera. The metadata store, and therefore the catalog, needs the same level of security that the actual data has, right? You have to make sure that only the right people can get access to it, not just at the level of users, meaning which user can access what part of the metadata to find the data they want, but also the actual metadata store that sits on NetApp Storage. We have to make sure that it's as secure as, or probably even more secure than, the actual data.
That's part of our job. It's not a gap; it's something that we actually do today as part of the functionality we ship. Of course, we'll keep improving as customers tell us more. At the end of the day, that metadata is nothing but data, right? It's customers' data that we have to secure as well as we secure the actual data.
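A toy sketch of the principle described here: catalog queries should pass through the same access gate as the data itself, so metadata never leaks to users who could not read the underlying files. All names and structures below are hypothetical, not a NetApp API.

```python
# Toy illustration: metadata catalog queries filtered by the same permissions
# that govern the underlying data. Hypothetical schema, not a NetApp API.
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogEntry:
    path: str
    tags: tuple
    owner: str
    allowed_groups: frozenset

CATALOG = [
    CatalogEntry("/finance/q3.xlsx", ("finance", "pii"), "alice", frozenset({"finance"})),
    CatalogEntry("/eng/design.md", ("engineering",), "bob", frozenset({"engineering"})),
]

def search(tag: str, user_groups: set) -> list:
    """Return only entries the caller may see; metadata itself leaks information
    (names, tags, owners), so it gets the same gate as the data."""
    return [e for e in CATALOG if tag in e.tags and user_groups & e.allowed_groups]

print(search("finance", {"finance"}))      # entry visible to the finance group
print(search("finance", {"engineering"}))  # empty: no metadata leakage
```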
George Kurian, CEO, NetApp: All right, I'll ask one last question.
Gagan Gulati, SVP and GM, Data Services Group, NetApp: Yes.
George Kurian, CEO, NetApp: Because while you've been focused on cyber resiliency here, since we've had some cool announcements today, we also have a number of premium value services available today through the marketplace. This audience is probably less familiar with them. It might be good to explain what they are, how they work in customer environments, and how we deliver them.
Gagan Gulati, SVP and GM, Data Services Group, NetApp: Oh, that's perfect. When you talk about premium value services, we are delivering end-to-end orchestrated SaaS-based services to our customers in three big categories. All of these services are available through the marketplace or a typical licensing model as well. The first category is cyber resilience, the one I talked about, where we deliver three big services. The first is the ransomware resilience service I mentioned already, with a lot of great capability that we are working towards. The second is our unified backup and recovery service, used by hundreds of our customers to back up their data and recover it when needed. It has been in existence for a few years, it's available through the marketplace, and it's very well used.
The third big service in the cyber resilience portfolio is our disaster recovery service, which helps our customers protect their VMware-based workloads with orchestrated disaster recovery from one on-premises data center to another. We are also the first to make it available such that they can do DR of their VMware-based workloads from on-premises to AWS. You could have a workload running on-premises, and if your DR strategy says the secondary site should run in AWS, no problem. We worked with our AWS team, and we were the first ISV, if you want to call it that, to make this functionality available to our customers. That's the third big piece of the puzzle in cyber resilience, the orchestrated end-to-end services that we make available to our customers. That's the first pillar.
The second big pillar for us, of course, is AI, with what we announced today in AI Data Engine. That is also going to be available to our customers through the marketplace, and we are working towards that. The third big pillar is what we call governance. What we have available there today is a very well-used service called DII, also available through the marketplace, so customers can use these SaaS-based services to observe their entire environment. To the question asked earlier, this service goes beyond just NetApp Storage and gives customers monitoring and observability for their entire environment. So, three big buckets of marketplace-based SaaS services: cyber resilience, AI, and, last but not least, governance.
We're starting with storage governance and infrastructure governance, and over a period of time, we'll do more. Those are the three big pillars that we have over there.
Chris, NetApp: All right, any final questions in the audience? All right. I appreciate your time very much today, and thank you so much for your debut voyage with us. OK, our next speaker, you guys have all heard from before, Sandeep Singh. He is the SVP of Enterprise Storage. All of your Flash, Block, and probably AI questions, we'll send his way.
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: All right, hello, everybody. As Chris mentioned, I'm the SVP and General Manager for Enterprise Storage. I've been part of NetApp for three years, and my background is predominantly in enterprise storage. I was part of a startup a long time ago called 3PAR, where I led product from pre-revenue to well over $1 billion post-acquisition by Hewlett Packard. I was also part of Pure Storage for almost five years, helping Pure scale from less than $50 million in revenue to well over $1 billion. Prior to joining NetApp, I was leading product at HPE Storage. Across the board, over the last three years, I've had the tremendous opportunity to speak to hundreds and hundreds of customers globally. They have struggled with having to support a plethora of workloads, and AI is now the newest addition to that.
When you think about workloads in a typical modern enterprise, you're going to find high-performance file, whether that's AI or, for example, EDA workloads or media and entertainment types of workloads. You're going to find virtualization, databases, containers. You're going to find more of the capacity flash, general-purpose and test-dev types of workloads. You're also going to find secondary workloads, whether it's backup or Cyber Vault or those types of scenarios. Customers struggle with how to provide the best infrastructure to support this plethora of workloads. Their data is also spread across on-prem and public cloud, and they want the flexibility to get the right balance of workloads across on-premises and cloud, seamlessly.
When we hear about customers and their challenges, one that is ever present is complexity. As soon as you double-click into that, the complexity just exponentially expands with all of the infrastructure silos. That's part one. Part two, associated with that, is that everyone in IT has talent shortages as well as skill gaps. Number three, they want to seamlessly leverage the agility of public cloud and have that flexibility of on-prem and cloud. Number four, what Gagan was just talking about in terms of cybersecurity: storage ultimately becomes that last line of defense. It becomes incumbent on IT leaders to have the most secure storage, and that is a top C-suite priority across the board.
Of course, what you've heard a lot about today: everyone is looking at how to seize an AI advantage and how that becomes a game changer for them. The common thread across all of this is fundamentally data. Data fuels our customers' workloads, and data is the fuel for AI. When we look at the opportunity and how we can be that strategic partner to customers, it begins with the storage infrastructure, but very quickly it is about data. This is where, fundamentally, what we have done over the decades is invest in building a data platform. For customers, having that right foundation, that data platform, is critical. They need to be thoughtful and mindful about building a unified data foundation to get rid of the infrastructure silos. When they have infrastructure silos, complexity abounds: there's inconsistent management and inconsistent automation.
There are inconsistent data security models, where the weakest link becomes the exposure window, inconsistent operational recovery workflows, and an overall inconsistent experience across the board. The first step becomes building that unified data foundation. This is where we have built a unified, enterprise-grade data platform so that customers can collapse silos. They get consistent management and consistent automation, so they don't have to worry about talent gaps and re-automating. They get one consistent data security model, consistent operational recovery workflows, and a consistent on-prem and cloud experience. That's step one. We also have a fully refreshed, comprehensive, industry-leading, end-to-end portfolio of data storage products.
It spans high-performance flash, capacity flash, and hybrid flash, so customers can leverage it no matter the use case, the price point, or the performance level, across the breadth of application workloads that empower their internal innovators. We are a top leader in flash across the board. Our portfolio is also fully interoperable, which means customers are collapsing silos and also seamlessly getting the lowest cost of data over its lifecycle. In the data world, there is hot application data and cold data. We give customers complete flexibility with automated, granular tiering built in, where cold data can be automatically tiered on-prem to on-prem as well as on-prem to cloud.
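As a simplified illustration of the tiering-policy idea described here, the sketch below flags data as cold after a configurable number of days without access. Real arrays tier cold blocks inline rather than whole files, and the threshold is a hypothetical example.

```python
# Deliberately simplified sketch of an age-based tiering decision. Real
# systems tier cold *blocks*, not whole files; this file-level version
# just illustrates the policy idea.
import os
import time

COOLING_DAYS = 31  # hypothetical threshold

def is_cold(path, now=None):
    """A file is 'cold' if it has not been read for COOLING_DAYS."""
    now = now or time.time()
    last_access = os.stat(path).st_atime
    return (now - last_access) > COOLING_DAYS * 86400

def tiering_plan(root):
    """Walk a tree and list candidates to move to the capacity/cloud tier."""
    candidates = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if is_cold(path):
                candidates.append(path)
    return candidates
```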
With everything that Gagan was just talking about, we have the data management tied into the application workloads. Whether you're running virtualization or database applications and you want application-consistent backup copies, maintaining those libraries of point-in-time copies for recovery so you can sleep better at night, that is built in. That's through our SnapCenter, and customers love that capability. We also invest in full integration into the top workloads so that customers, including the administrators at the workload level, can seamlessly consume the underlying infrastructure end to end. With our announcements today on AI, we're unlocking value with NetApp AFX: enterprise-grade disaggregated storage that delivers extreme performance and massive linear scale. It is NVIDIA DGX SuperPOD certified, including with DGX GB300.
That enables customers to deploy their AI factories built on NetApp AFX. The AI Data Engine that pairs with AFX enables customers to deploy a full AI data pipeline that is secure and efficient. It comes with integrated data discovery, data curation, and data guardrails, as well as full vectorization and vector embeddings for GenAI applications, all built in. It makes it super simple for customers to build an end-to-end AI data pipeline. We also fully recognize that enterprises are going to be at different levels of AI maturity within their organizations: some are in POC stages, some will be in deployment stages. We're simplifying this end to end, and all of this value of NetApp AFX combined with the NetApp AI Data Engine, we're also making available as a service with NetApp Keystone.
Whether the customers are in POC stage or production deployment stage, they get that complete flexibility of being able to adopt all of the NetApp enterprise AI value that we're unveiling. They get to do that as a service and then be able to scale seamlessly as their AI initiatives grow. With that, I will open it up.
George Kurian, CEO, NetApp: Right, Ananda in the back.
Ananda Baruah, Loop Capital: Hey, thanks. Yeah, Ananda Baruah with Loop Capital. Thanks for that; that was a lot of great detail. As folks begin to deploy AI applications, or AI features inside of existing applications, and move proofs of concept into production, and actually, if they even have to do this for proofs of concept, let us know, do you see them doing incremental storage spend along that journey for those AI applications? Or do you see them phasing in the AI applications and making purchasing decisions along refresh-cycle lines? I have a quick follow-up too. Thanks.
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: In terms of the AI spend, it's fast evolving. The overall AI technology is fast evolving, and how organizations are deploying it is also fast evolving. A lot of enterprises are forming AI centers of excellence, where they will formalize best practices and also have shared infrastructure as part of that center of excellence. For some, it's a matter of adding AI; for others, it's a matter of building out net-new AI initiatives. Our vision, and what we are enabling for our customers, is that AI should not be another silo, because silos continue to propagate complexity. Customers need not only the performance and scale for AI; they need all of the enterprise-grade capabilities, all of that flexibility of hybrid and multi-cloud, and, of course, security has to be built in.
What we've enabled customers to do is leverage it as just another workload along with everything else. They may start by building out dedicated infrastructure, but very quickly it becomes part of the overall infrastructure.
Ananda Baruah, Loop Capital: Thanks. Just as a quick follow-up, you actually began to touch on it, the point about increased complexity. Is there anything about AI in the complexity conversation that pushes the organizational data management paradigm over any sort of tipping point, such that the addition of AI to the paradigm almost necessitates something like simplification? And if the answer is no, please say no. Just because you're here, I thought to ask the question, but I don't want to lead the witness. Thanks.
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: Look, in the AI world, first of all, you have to recognize that within an organization you have multiple different personas that are part of that journey. You've got the data engineers, the data scientists, the AI developers, and the IT teams that are beginning to be part of that conversation. Clearly, there are AI frameworks, tools, and infrastructure even outside the enterprise; obviously, there is a ton in the public cloud, and you now have newer AI factories emerging as well. Ultimately, the first thing customers have to think about is: how do I actually get my data AI-ready? That's really the first step of the journey, because data becomes the fuel for AI. Unlike consumer AI, enterprise AI really needs to be informed with the context of the enterprise's data.
That's the important first step: getting their data AI-ready. The next question becomes: how do I get my data from where it is to where the GPUs live, while preserving all of the security permissions and without propagating a plethora of copies? As soon as you make copies, you lose the data lineage, and you've also lost the security context. We have technologies, for example our FlexCache and SnapMirror technologies, that enable customers to seamlessly make their data accessible to AI while preserving all of their security permissions, without making copies.
When you think about the overall data pipeline that customers, ultimately at the data scientist level, have to stitch together, there are multiple fragmented tool sets, and along those tools, multiple copies are being generated. Often we hear about a challenge I'd articulate as data bloat, where customers complain that their data is multiplying 10x or 20x, especially during the vectorization process. Fundamentally, if that problem isn't solved for them, it becomes incredibly costly to deploy AI at scale. What we have done is simplify this end-to-end AI data pipeline with the AI Data Engine, and we have built our own technology for super-efficient vector embeddings to help customers avoid the data bloat challenge.
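A toy sketch of one way to curb this kind of bloat: hash each chunk of text and embed it only once, storing references instead of copies. The embed function is a stand-in for a real embedding model, not a NetApp component.

```python
# Toy sketch of deduplicated vectorization: identical chunks are embedded and
# stored once, with documents holding cheap references instead of copies.
import hashlib

def embed(text: str) -> list:
    """Stand-in embedding; a real pipeline would call an embedding model here."""
    h = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in h[:8]]  # fake 8-dimensional vector for illustration

vector_store = {}  # chunk_hash -> embedding (stored once)
doc_index = {}     # doc_id -> list of chunk hashes (references, not copies)

def ingest(doc_id: str, chunks: list) -> None:
    refs = []
    for chunk in chunks:
        key = hashlib.sha256(chunk.encode()).hexdigest()
        if key not in vector_store:  # identical chunk: embed once, reuse after
            vector_store[key] = embed(chunk)
        refs.append(key)
    doc_index[doc_id] = refs

ingest("doc1", ["shared boilerplate", "unique body A"])
ingest("doc2", ["shared boilerplate", "unique body B"])
print(len(vector_store))  # 3, not 4: the duplicated chunk was stored once
```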
We've also partnered closely with NVIDIA to integrate their NIM microservices into the AI Data Engine. One last step: we're also investing in integration with our public cloud partners so that customers' data is not only seamlessly accessible in the cloud but can be stitched into all of the AI frameworks and tools being invested in there. That's how we're looking at simplifying this end to end. You also heard George on the keynote stage talk about this whole notion of a metadata fabric and a knowledge graph, because when you fast-forward AI, enterprise data today is mostly about LLM-powered use cases, and tomorrow the evolution takes us to agentic AI.
This whole notion of an enterprise's data set being curated, classified, protected, and then made accessible through a knowledge graph becomes critical for customers to truly unlock the power of agentic AI.
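As a minimal illustration of the metadata-fabric-and-knowledge-graph idea, the sketch below models datasets, owners, tags, and lineage as a graph an agent can traverse. The schema is hypothetical and the networkx library is used only for convenience.

```python
# Minimal sketch: metadata as a knowledge graph that an agent can traverse.
# Nodes are datasets, owners, and tags; edges carry the relationship type.
import networkx as nx

g = nx.DiGraph()
g.add_edge("dataset:q3_sales", "owner:alice", relation="owned_by")
g.add_edge("dataset:q3_sales", "tag:finance", relation="classified_as")
g.add_edge("dataset:q3_forecast", "dataset:q3_sales", relation="derived_from")
g.add_edge("dataset:q3_forecast", "tag:finance", relation="classified_as")

# An agent asking "what finance data exists?" walks edges into the tag node.
finance = list(g.predecessors("tag:finance"))
print(finance)  # ['dataset:q3_sales', 'dataset:q3_forecast']

# Lineage: everything reachable from the forecast dataset (sources, tags, owners).
print(nx.descendants(g, "dataset:q3_forecast"))
```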
George Kurian, CEO, NetApp: All right, Lou. Sorry. Steve.
Lou Miscioscia, Daiwa Capital Markets: OK, thank you. Lou Miscioscia, Daiwa Capital Markets. I haven't heard the words 3PAR or David Scott in quite a while. There we go. Seriously, though, you've been with some great storage companies, so you probably have insight that many others might not. What could NetApp do better? We obviously always hear about the strengths of the unified operating system, but why isn't it gaining share faster in comparison to the other competitors, HPE, Pure, or some of the other players out there?
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: Look, first of all, it starts with building a unified data foundation. That experience is unmatched, bar none, in the industry. ONTAP is a gift that keeps giving; it has matured over decades. The power of unifying application workloads and serving them with a unified data plane, coupled with a unified control plane, is an unmatched experience across the industry. No one else is able to deliver that. When others talk about unifying, they still have infrastructure silos. An infrastructure silo means you might get some value for a given application workload, but you're still siloed within its boundaries; as you go from block to file to object, you end up segregated. That complete flexibility of a unified data foundation is step number one.
Step number two, nobody in the industry has had the foresight or the level of integration that NetApp has achieved with our hyperscaler partners. This gives customers the flexibility to right-size and shift workloads without having to re-architect their applications, and that's amazing for our customers. Thirdly, looking forward, and even right now, cybersecurity and security being built in are top of mind, right? No one is able to match the game-changing technologies that Gagan was describing. It begins with real-time ransomware detection, where we can detect a ransomware attack within seconds to minutes, unlike approaches that catch it in backup or secondary workloads hours to days later. That means very little data is impacted before the attack is detected.
We create these tamper-proof snapshots for rapid recovery, right? Then everything we were just talking about with AI: we have a fully refreshed portfolio and a comprehensive, unified, enterprise-grade data platform. We're evolving AI from another silo into just another application workload and giving customers complete flexibility, not just performance and scale. You've heard a lot from others in the industry about performance and scale. This is about delivering AI with performance and scale, with enterprise-grade capabilities, with the most secure storage, and with full hybrid multi-cloud.
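To illustrate the kind of signal real-time detection can key on: ransomware-encrypted writes are high-entropy and tend to arrive in bursts of rewrites. The toy detector below shows that idea with made-up thresholds; it is an illustration of the concept, not how ARP AI is implemented.

```python
# Toy illustration of ransomware signals: encrypted output is high-entropy,
# and attacks produce bursts of writes. Thresholds are hypothetical.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~8.0 for encrypted/compressed data, much lower for text."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_ransomware(write_payloads: list, writes_per_minute: int) -> bool:
    high_entropy = sum(shannon_entropy(p) > 7.5 for p in write_payloads)
    ratio = high_entropy / max(len(write_payloads), 1)
    return writes_per_minute > 500 and ratio > 0.9  # burst of high-entropy rewrites

print(looks_like_ransomware([b"hello world" * 50], writes_per_minute=12))  # False
```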
Lou Miscioscia, Daiwa Capital Markets: A little bit of a follow-up. If you want, you could brag a little bit. Which competitors would you say are the easiest ones to compete with, and which are a little more difficult? Thank you.
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: Look, I won't go into specific competitors here, right? What it fundamentally comes down to is IT leaders having the recognition that they don't need to just refresh; they need to modernize, because they need to be cyber resilient, have their data AI-ready, and enable the outcomes being demanded by internal customers across the board. As soon as that realization happens, it very quickly comes back down to: what is that right data foundation? We're right there to help customers see and showcase how we can be that strategic partner to modernize their end-to-end infrastructure. We look for how we can solve the customer's burning pain points and how we can address them.
As long as we are doing that incredibly well and in a differentiated manner, that is what we're looking for.
George Kurian, CEO, NetApp: All right, we will get to you, Matty.
Steve Fox, Fox Advisors: Thanks. Steve Fox with Fox Advisors. There's been a lot of talk from the company in the last few quarters about enterprises doing a lot of testing on new workloads, et cetera. You've laid out a path for how these workloads could be monetized a lot better. I'm trying to understand the bottleneck here. When is NetApp going to start winning? Is it one workload at an enterprise customer, five? Is it that they try to pipeline it and they can't? How is this going to play out so that you guys are ultimately successful?
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: Look, I would say overall, we are a top leader in flash. You've seen our continued growth in overall flash storage; that's an overall trend as customers continue to modernize. Across the board, when we look at the deployments customers have on NetApp, you will find virtualization, database workloads, high-performance file workloads, and secondary workloads. Those are prevalent across our customer base, which spans from the topmost strategic, the largest of the large enterprises, to enterprises overall, as well as a lot of corporate and commercial accounts. We span the gamut across the geos. We see customers continuing to consolidate and unify their application workloads, and that's not only on-prem; it's across on-prem and cloud.
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: I think your question is: how do we get into a customer? Is it a single workload? Is there a specific pain point that we address and then expand from there?
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: Yeah, look, from that perspective, there are multiple ways we land in a customer account. When you think about the journey of getting data AI-ready and accelerating AI initiatives, it begins with helping customers build a unified data foundation. The way they take advantage of that can be by landing a file workload, a block workload, or an object workload, any of those application workloads I talked about. That begins the journey for them. It evolves into customers seamlessly extending to cloud, or extending on-prem to as-a-service with our Keystone Storage as a Service offering. It evolves into becoming cyber resilient, with all of the capabilities that are built in, as well as the ransomware resilience service that Gagan was talking about. The final step is getting their data AI-ready.
It's not necessarily a sequential journey, but these are the different paths for onboarding onto the NetApp data platform.
Analyst: OK, maybe I'll finish this one. As a follow-up to that, and rephrasing the question: you're working on unified enterprise-grade storage, a unified data foundation, a unified data lake. There's a lot of stuff that you're doing, and you're working with customers. The fact that FY25 was a strong year for all-flash arrays gives you a tough comparison. Perhaps all of this unification, these new approaches and problem solving, would manifest in some traction in FY27, without asking you a specific financial question. We've been hearing about all of this problem solving, and we're trying to figure out how the puzzle is coming together. It seems to me that it was more of a '27 story. We're in the sixth inning.
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: Look, I won't comment on the financial side of it. Wissam is here, Chris is here; they are much better equipped than I am. What I can say is that we are laser-focused on making sure, one, that we fundamentally understand the burning pain points for our customers across the board. Two, that we have, and continue to enhance, the best differentiated unified enterprise-grade data platform as that foundation for our customers. Thirdly, that we give them all of the necessary capabilities, whether it's the as-a-service offering on-prem or in the cloud, or the necessary software and workload-tied capabilities, to not only accelerate and simplify their existing workloads, but ultimately go beyond them.
Analyst: It's not like competitors are ahead of you; we're all in it together. If I just try to think about what has happened over the past 12 months and all the effort you've put in, we're in the sixth or maybe seventh inning of that journey, with all the good things that you have done. It's not like we're still in a dark room trying to figure out where the door to AI stardom is. We're getting close. Would you agree with that assessment?
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: I would say, look, I don't know about the innings, but the AI journey, especially enterprise AI, is just getting started, right? Looking forward, we see tremendous opportunity in working with customers on their enterprise AI journey, and we're super excited about the innovations that we're bringing to market on that front.
Analyst: Thanks.
George Kurian, CEO, NetApp: All right, we have one final question from Samik.
Samik Chatterjee, JPMorgan: Hi, Sandeep. Samik from JPMorgan. In terms of the conversations you're having with customers relative to AI, how much credit do you get if you are the incumbent supplier already? Is it a fresh bake-off between all the vendors on a feature-by-feature basis, or do you get credit for being the install base? I'm just trying to figure out whether having a large install base is a benefit when that bake-off happens. And given that the AFX product just launched, from your experience with AI and those conversations with customers, how much of a timeline do you think AFX needs in terms of customer education and adoption? How do you think about that?
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: Yeah, look, overall, when we think about AI, this is a fast-evolving space. The technology is evolving, and customer use cases continue to evolve. What you used to hear a lot about was the model-training use case. Enterprises very quickly realized, first of all, that they can't spend hundreds of millions of dollars to train models. Secondly, with use cases shifting to inferencing, and with the emergence of reasoning language models and test-time compute scaling, the predominant use case in the enterprise will become a lot more about inferencing. And when we're speaking with customers: we store, what, over 100 exabytes of customer data globally.
That gives us a tremendous asset across the board, because if you're fundamentally making another copy of your data and then continuing to multiply it, not only do you get the data bloat challenge, all of the security context is also being lost in those transformations. This is where we see a tremendous opportunity. We've had a number of conversations with customers who have gone down the path with some upstarts, where they've looked at performance and scale because that's all those vendors had.
Where they've been asking us for help is not just performance and scale; it needs to come along with all the enterprise-grade capabilities, all of the hybrid cloud workflows, because AI is inherently a hybrid, end-to-end workflow for them, and all of the security built in. That is where we see a tremendous opportunity to help customers end the silos. We can end the compromises for them, and they need to end those compromises in order to deploy AI in production at scale.
George Kurian, CEO, NetApp: All right, thank you very much, Sandeep.
Sandeep Singh, SVP and GM, Enterprise Storage, NetApp: Thank you.
Chris, NetApp: All right, thanks again, Sandeep. Really appreciate it. Now we have Pravjit Tuana. He is the SVP of Cloud Storage and Services, with lots of exciting announcements today and an ongoing, interesting, and great part of the business. With that, I'll hand it over.
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: Hi, everyone. As Chris introduced, I run Cloud Storage as well as the open source technology stack at NetApp. When I say open source stack, I am talking about our InstaCluster offerings. I'll give a quick two-minute brief on the portfolio and the kinds of announcements we are making this week. In a nutshell, our Cloud Storage and Services portfolio includes three things. The first is our first-party cloud storage offering, where we are natively integrated into all three major hyperscalers. It's not like we have bolted something on; these are co-developed, co-engineered capabilities that we provide to our customers, integrated across the engineering stack as well as everything from billing to go-to-market.
It provides unique differentiations that are not otherwise possible if you just bolt it on as a marketplace offering. The second aspect of our portfolio is what we call Cloud Volumes ONTAP, or CVO, which is basically a Swiss Army knife of all the ONTAP capabilities, available in all three hyperscalers. First-party cloud storage is fully managed, no-ops; we manage it on behalf of the customer. CVO gives customers the ability to fine-tune every single dial of their install of ONTAP in the cloud, in all three hyperscalers. It's very commonly used as an extension of hybrid storage and those kinds of capabilities. The third aspect is our open source offerings.
We provide managed Kafka, Postgres, Apache Cassandra, ClickHouse, and OpenSearch, all as managed offerings across the open source stack. The idea is that it provides a true open stack to begin with, and it also enables true multi-cloud: if you're running on the open source stack, you can move your workload from one hyperscaler to on-prem or to another hyperscaler. Those are the three main building blocks; a lot of capabilities get built on top of them. As far as focus is concerned, we have recently concentrated on a few things. First and foremost is bringing AI to the data. We are the only cloud storage vendor natively integrated into the hyperscalers' AI and analytics stacks, right?
On the AWS side, we are integrated with Bedrock, Q, and those capabilities. You don't have to move the data out to S3 or any object store; you can run your AI stack right then and there. Same way in Azure: we are connected with their stack around Azure AI Search, AI Studio, analytics, and so on. This week you saw Google announcing Gemini Enterprise; same way, we are integrated into the Gemini Enterprise side of things as well. The second focus area is bringing all the ONTAP richness we have built over the last 30 years, in performance, security, and cost optimization, over to the cloud. We have built tons of capabilities this year across all three hyperscalers to bring that richness to our cloud offerings.
The third is that, in the end, customers buy us for workloads, which is our way of saying those are the outcomes for which they buy our products. We are not focused only on AI as a workload; we have also grown by leaps and bounds in EDA, HPC, SAP, databases, and VMware. Those are workloads where we have built rich capabilities so that there is no aspect the customer is missing. Finally, we are also focused on making our offerings better for developers. This week we announced Visual Studio Code extensions: you can now use a chat-style interface to do all the things you would otherwise do in the hyperscaler console, right?
You can say, hey, provision my storage or delete my volume, all those kinds of capabilities, through a chat interface right from the IDE itself. In the fullness of time, we plan to extend into other IDEs; we started with Visual Studio Code. Before taking questions, I'll just share a few numbers. Our cloud storage has been growing by almost 50% year over year. We are now at 2.2 exabytes of storage. In our open source InstaCluster offering, we now run more than 21,000 managed units on behalf of our customers. We have over 5,000 paying enterprise customers, growing at a very substantial rate year over year. One good thing is that our cloud offerings aren't just on-prem ONTAP customers migrating over.
Yes, there is a portion of that, but significantly, almost two-thirds are new customers who are using NetApp for the first time. I'm happy to take any questions about our capabilities, numbers, where we are heading, AI, anything.
George Kurian, CEO, NetApp: Maybe not numbers.
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: No, the usage numbers, not the financial numbers. Usage numbers I can talk about.
George Kurian, CEO, NetApp: All right, questions? Everyone's tired. OK, Samik.
Oh, sorry.
Samik Chatterjee, JPMorgan: Maybe it's slightly numbers-oriented, but I'll frame it this way. The first-party storage services grow really strongly, as you said, 50%-plus, or 30% to 40%; the numbers are really strong. The rest of the business tends to have a lower growth rate, and from our perspective looking in, the services you provide outside of first-party storage seem to have lower attach in terms of what enterprises are adopting on the public cloud. Maybe just get into the details of why, outside of first-party storage, there seems to be a lower growth rate for the other businesses. Is it something that needs to be addressed over time? Thanks.
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: Are you talking about our cloud businesses like CVO, or are you comparing it with our on-prem businesses?
George Kurian, CEO, NetApp: Cloud business. DII and some of the other businesses.
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: Yeah. I don't have the precise numbers top of mind for DII. On InstaCluster, we are seeing almost the same kind of growth we see in our first-party cloud storage. The growth depends on the hyperscaler, right? The growth for our CVO product, which is self-managed ONTAP, is along similar lines. They might not match percentage for percentage, but the growth rates are pretty substantial, way higher than the industry.
George Kurian, CEO, NetApp: I'll just jump in and get you off the hot seat. Don't forget, we have a number of services that we end-of-lifed and demonetized, and we began that about a year and a half ago. There are some headwinds you're seeing there, though we're almost through them. By the time we lap the Spot divestiture, I think you'll see a cleaner cloud number on the report. All right, Tim.
Tim Long, Barclays: Hi, Tim Long at Barclays. Thanks for this. Two, if I could. First, a few of the offerings have now filled out, where you have every offering on Google Cloud, Azure, and AWS. Could you talk about filling in those holes and how meaningful you think that will be to usage? Second, I'm just curious: you talked about two-thirds of the new customers being new to NetApp. What is that motion like? Obviously, you're getting help from the hyperscalers pulling them in, but what's the decision tree that they go through? It's probably a much simpler decision if they're ONTAP on-prem. How is that decision set a little different? Thank you.
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: No, thank you. Let's start with your first question, around filling the capability gaps and completing our matrix in the cloud. What we have heard a lot from our customers is especially around workload consolidation. The unified block and file offering we provide, which has long been available on-prem and is now also available in our cloud services, is one of the unique differentiators for why customers start using our capabilities. Earlier, if you remember, we started our journey with file storage in the three hyperscalers; now we have brought file plus block. Customers want to run their workloads in a unified way. They don't want to use one vendor for one thing and another vendor for another. It simplifies things.
The other aspect is that, if you look at NetApp's 30-year history, we have built an array of data management capabilities, things like Snapshot and SnapMirror. Those capabilities are now becoming really useful for our cloud customers too. If you're on Amazon FSx for NetApp ONTAP and you want a multi-AZ or multi-region setup, you can use SnapMirror to set it up. So the second aspect of filling the gaps is bringing over the rich data management capabilities we have. The third aspect is that, over the years, we have built a lot of capability in price and performance optimization: deduplication, compaction, compression, all of those.
We have brought all those capabilities into the cloud as well. I was looking into the numbers: if you're using our block storage in the cloud, you get almost 4 to 5x data efficiency because of the capabilities we have built over the years, which is a unique differentiation. The performance work we have done, around very high IOPS and so on, is also now available in the cloud. More importantly, if you're a customer using NetApp on-prem and you move to any of our cloud offerings, be it AWS, Google Cloud, or Microsoft Azure, you don't have to refactor your applications. You don't have to rewrite them. All of those things fill in the gaps and provide a much more differentiated offering than others.
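For intuition on where efficiency figures like this come from, here is a toy computation of a logical-to-physical ratio from block deduplication followed by compression. Real arrays do this inline at the block layer, and actual ratios depend heavily on the workload; the repetitive sample below is chosen only to make the effect obvious.

```python
# Toy measurement of the two mechanisms behind storage efficiency figures:
# block-level deduplication, then compression of the unique blocks.
import hashlib
import zlib

def efficiency_ratio(data: bytes, block_size: int = 4096) -> float:
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).hexdigest(): b for b in blocks}     # dedup
    stored = sum(len(zlib.compress(b)) for b in unique.values())    # then compress
    return len(data) / stored if stored else 0.0

# Highly repetitive data (think VM images, logs) dedups and compresses far
# better than the 4-5x quoted for mixed real-world workloads.
sample = (b"the same 4 KiB of config text..." * 128)[:4096] * 100
print(f"logical/physical ratio: {efficiency_ratio(sample):.1f}x")
```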
There was a second part of your question.
Tim Long, Barclays: Yeah, just the decision tree for the new NetApp customers on cloud services.
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: Yes. We have been understanding and learning this over the years as the services have grown, and one thing has become clear to us: customers don't choose based just on the logo of the vendor providing it. They look deeply into which of their problems are being solved. From that perspective, as I was saying, the decades of data management capabilities we have built really resonate with our customers. Same with all the ONTAP innovation over the years that we have brought to the cloud; that also resonates. Unified file and block is one of our most differentiating aspects.
Whenever an app developer or IT admin is evaluating, they look into those capabilities. The fact that we are natively available inside the console, the SDKs, the APIs, the CLIs, all those aspects of the hyperscalers, also makes us easy to find and discover. Once you start building an app, or whatever you are building, you find the richness of our capabilities, and that attracts them to use us more and more. Yeah, roughly 55% of our new logos are new to NetApp in cloud.
George Kurian, CEO, NetApp: All right, more questions?
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: We also announced a few capabilities this week that I can talk about, from GCNV Block to our data migration capabilities, and also all the AI integrations we now have with each hyperscaler. These are unique and differentiated from all aspects.
George Kurian, CEO, NetApp: It would be great to cover those and really talk about why they matter to customers.
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: Yeah, let's talk about the AI part first. The reason we took this unique approach is that customers were telling us all the time that the biggest problem they have in building their AI flows is having to copy the data over to multiple places. It breaks security, cost, all those aspects. That's why we went ahead and did native integrations with the hyperscaler AI stacks. We had capabilities like SnapMirror and FlexCache available on-prem; they're now also available across all three hyperscalers, which makes it really easy for our customers. Same on the InstaCluster side: our customers told us they want a real open source alternative for the complete GenAI infrastructure.
In our InstaCluster offering, we have capabilities from Postgres-based vector databases to complete OpenSearch to a soon-to-come MCP Gateway. The idea is that you don't have to stitch all those pieces together; you can orchestrate from one layer and move between on-prem and cloud. You get a real multi-cloud play with no vendor lock-in. It is open and cost-optimized, and all of that is exciting for our customers. That's the AI side of things. On the block side, as I was saying, the biggest differentiators have been unified storage, the data management capabilities, price-performance optimizations, and familiarity from using our stack on-prem and bringing it into the cloud. We also announced this week a capability called Data Migrator.
What we have seen is that even if we have a beautiful castle in a hyperscaler, we still need a freeway to get people to that castle. That's why we built the NetApp Data Migrator, where you can pick up any NFS or SMB file share and get your data into the cloud. We are the only vendor providing it at no cost and with high consistency in terms of checksums and those kinds of capabilities.
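A minimal sketch of the consistency property described here: copy a share and verify that source and destination match via checksums. Paths are hypothetical, and a production migrator would also preserve ACLs and handle retries; this only illustrates the verification step.

```python
# Minimal sketch of checksum-verified migration: copy every file in a tree
# and prove the destination matches the source with SHA-256 digests.
import hashlib
import os
import shutil

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def migrate_verified(src_root: str, dst_root: str) -> None:
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_root, os.path.relpath(src, src_root))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)  # copies data plus mtime/mode metadata
            if sha256_of(src) != sha256_of(dst):
                raise IOError(f"checksum mismatch after copying {src}")
```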
George Kurian, CEO, NetApp: All right, Ananda.
Ananda Baruah, Loop Capital: Hey, thanks. Thanks for doing this. Yeah, Ananda Baruah with Loop Capital. Do you see any potential for an on-prem AI catalyst for any aspect of the cloud business? As a specific for instance: corporate customers beginning to do more, say, model training in the cloud before they pull it back on-prem to go live in production, something like that. That, or anything like it, as a potential cloud catalyst that we all might see show up in the business.
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: To be honest, AI is not possible without cloud. AI is also one of the truly hybrid workflows out there; it's hybrid, it's multi-cloud, and we are seeing a lot of these patterns. We see patterns where somebody uses cloud only: they hook up their Amazon FSx for NetApp ONTAP to, say, Bedrock or Q, or to similar things in Microsoft Azure and Google Cloud. There are those who start their training in the enterprise and then take the next steps, inferencing and so on, to the cloud. And there are some in between. The journey is still early for many enterprises to have production-grade AI applications. We are seeing all flavors of them, and that's where we are uniquely shining.
Ananda Baruah, Loop Capital: I have a quick follow-up.
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: Yeah, sure.
Ananda Baruah, Loop Capital: I just thought.
George Kurian, CEO, NetApp: All right, say it to the mic.
Ananda Baruah, Loop Capital: Yeah, yeah.
Ananda Baruah, Loop Capital: Thanks. Quick follow-up on this. Thanks for that; that was helpful. I think Sandeep talked about the NeoCloud opportunity, or something beginning to pop up in the NeoClouds. As we're seeing more of your hyperscaler partners move workloads into the NeoClouds, and it seems like it's happening at scale and will happen at a bigger scale in the coming years, do you have an opportunity there with those AI workloads that go into the NeoClouds from the hyperscalers?
Pravjit Tuana, SVP, Cloud Storage and Services, NetApp: No, that's a good question. There are two aspects there. One is the NeoClouds, and another is the sovereign clouds. Let me quickly answer the sovereign one because that's the easy one: even by analyst estimates, 70% or more of sovereign workloads are running in the cloud. We are good there because we are available in all 130-plus regions. There is no other storage vendor, or really any vendor, available in that many regions, not even the individual hyperscalers, because we are a sum total of all three. We are also available in all the GovClouds, the European sovereign region, the top secret clouds, all those places. For sovereign, we are very well covered.
When it comes to NeoClouds, one of the unique things we have built over the years is hardware-based offerings inside the hyperscalers. We also have software-defined storage layers, like what we have done with AWS, what we are doing with Google, and eventually with Microsoft as well. Those capabilities can be applied to any kind of cloud, be it a NeoCloud or a hyperscaler cloud. Where we will bring unique differentiation is that, because we already have first-party storage services inside the hyperscalers, we can federate a lot of those workloads to work between NeoClouds and the hyperscaler clouds. We are looking at that, as Sandeep said. We are evaluating all those opportunities and seeing what customers really want. We don't want to build something just because it's cool to build.
We really want to work backward from what customers are asking and work with customers to build those kinds of capabilities.
Wamsi Mohan, Bank of America: Thank you. Wamsi Mohan, Bank of America. A couple of quick ones. First, how do you decide on investing for the cloud opportunity, in the sense that you obviously started with Microsoft Azure? You mentioned the hardware component there, and it sounded like maybe you will have a software-only component there as well. A, in terms of where you are today, do you need to invest more in some of the other hyperscalers beyond Microsoft Azure or not? B, as you look at what customers are using NetApp for in the cloud, do you see some use cases favoring one hyperscaler versus another?
Lou Mitsocha, Daiwa Capital Markets: Yeah, let me clarify one thing about how we deployed hardware into the hyperscalers. Our control plane was jointly co-engineered between us and the hyperscalers. They are not using it just as a hardware array; it is a full-blown control plane that we built, and that is where most of our IP, including making it cloud agnostic, has gone. There are certain use cases, like really intensive workloads, where our hardware solutions embedded inside the hyperscaler work fine. There are also a lot of Kubernetes-type workloads where it naturally makes sense to have a software-defined storage offering. Those customer asks are what define why we take both approaches. For the customer, we make it seamless based on their workload; they don't have to do the math of this versus that. We map your workload to the right storage solution and do that hard work for you, so they get the price, performance, security, and efficiency out of it. That is how we make these decisions about how we grow this.

I think your second question was whether we see unique patterns in one hyperscaler versus another. In general, the growth in certain workloads is very consistent across hyperscalers, especially EDA and HPC kinds of workloads, SAP, databases, and even virtualized environments. Those we see very consistently across all of them. The AI side, and the persona of the customers, is a little different in each cloud: Azure is more enterprise-centric, AWS has both, including a large startup ecosystem, and Google is probably somewhere in between. On the AI side, especially with what we are doing with Gemini Enterprise, with Bedrock and Q, and on the Azure side, a lot of subtleties have started to emerge, and customers use all of them. That's why I said before that AI is probably the most multi-cloud and hybrid workload out there. In the fullness of time, we expect customers to use all of these clouds based on the AI problem they are trying to solve, and our integrations are working out very well for them there.
George Kurian, CEO, NetApp: Any more questions? All right, Steve.
Shyam Nair, Chief Product Officer, NetApp: Thanks. Just a quick one. You mentioned the net new customers coming to NetApp. What happens with those customers in terms of expanding across the offering beyond cloud? How do you grow those customers once you have them on a cloud service?
Chris, NetApp: Yeah. I think there are multiple ways to look at this. We have built a lot of value-added services that they start using over time. They probably start with just provisioning a volume, but in the fullness of time they use our security offerings, our end-to-end protection, which is also available in all three of our clouds, or other value-added services like backup as a service or disaster recovery as a service. That's one dimension: how they start using our products and keep growing. Then there is what Sandeep was talking about with the AI Data Engine; I don't know whether Sandeep or Gagan spoke about it, but one of them must have. Those capabilities, from cataloging to vectorization to the metadata engine, also come from our ecosystem.

That is one way it works out. We have also seen it go the other way: some customers started using us in the cloud, and then we met their on-prem requirements too. We do see that cross-flow between the two. Once a customer gets the value of using our services, they start using them in many different dimensions. We also continue to learn from them what new capabilities to build and work backward from that.
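The transcript doesn't spell out how the AI Data Engine implements that cataloging-to-vectorization flow, so here is only a minimal, hypothetical sketch of the general pattern in Python; `catalog_files`, `embed`, and `VectorIndex` are invented stand-ins, not NetApp APIs, and a real pipeline would call an actual embedding model rather than a hash.

```python
# Illustrative only: a toy catalog -> embed -> index -> query flow.
import hashlib
import math
from pathlib import Path

def embed(text: str, dims: int = 8) -> list[float]:
    """Stand-in embedding: hash text into a fixed-length unit vector."""
    digest = hashlib.sha256(text.encode()).digest()
    vec = [b / 255.0 for b in digest[:dims]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def catalog_files(root: str) -> list[dict]:
    """Walk a directory tree and record basic metadata per file."""
    return [{"path": str(p), "size": p.stat().st_size}
            for p in Path(root).rglob("*") if p.is_file()]

class VectorIndex:
    """Minimal in-memory vector store with dot-product search."""
    def __init__(self) -> None:
        self.items: list[tuple[list[float], dict]] = []

    def add(self, vector: list[float], meta: dict) -> None:
        self.items.append((vector, meta))

    def search(self, query: list[float], k: int = 3) -> list[dict]:
        scored = sorted(self.items,
                        key=lambda it: -sum(a * b for a, b in zip(it[0], query)))
        return [meta for _, meta in scored[:k]]

if __name__ == "__main__":
    index = VectorIndex()
    for entry in catalog_files("."):            # 1. catalog
        index.add(embed(entry["path"]), entry)  # 2. vectorize and index
    print(index.search(embed("report.pdf")))    # 3. query by similarity
```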
George Kurian, CEO, NetApp: All right. I have one question that I get a lot, so I'll ask it of you. How do customers choose to use NetApp in the cloud? There are so many storage offerings. How does NetApp become their choice?
Chris, NetApp: Yeah. There is not a single answer, because it varies by customer. Some customers are very familiar with NetApp; remember, we have been around 30 years. They know us from on-prem, and when they start their cloud journey, they start using us there. The good thing is that 90% to 95% of organizations today are using cloud in some fashion, so overall it's a pretty big market. The second is that we are natively integrated into the hyperscalers' consoles, SDKs, APIs, and so on, so doing a POC or a discovery is a really frictionless experience. It is integrated into the hyperscaler's own billing and metering capabilities, so customers don't have to redo the whole thing.

Third, what makes first-party cloud storage uniquely differentiated is that we use the hyperscaler go-to-market motion. These are the hyperscalers' offerings, and they take them to market that way; we go through their whole sales and marketing machinery jointly. So there are multiple avenues. Recently we have also started focusing a lot on the developer persona, and developers are finding us in the Visual Studio Code marketplace and similar places. There are multiple places where the journey starts.
George Kurian, CEO, NetApp: All right. I'll give the audience one final chance. Nope. All right. Thank you very much.
Chris, NetApp: All right. Thank you, everyone.
George Kurian, CEO, NetApp: Hey, can I get that piece of paper? All right. Now for the session I've been waiting for the most, because this is the coolest customer panel we've ever had. You'll recognize the organizations that are about to come up on stage and join me. With that, I'll ask Aston Martin F1, the NFL, and the San Francisco 49ers at Levi's Stadium to come on up. Have a seat. Make yourselves comfortable. Thank you guys so much for being here. Since everyone knows what your organizations do, but probably no one knows who you are or what role IT plays in your organizations, maybe that's a good place to start. I'll hand it over to you, Aaron.
Aaron Amendolia, Deputy CIO, NFL: Sure. Aaron Amendolia, Deputy CIO at the NFL. My direct responsibilities include our infrastructure, cloud and on-prem; our innovation hub, where we incubate and try out new technology with R&D, either for the game or for the business itself; our events technology, the Super Bowl, the draft, and the international games; and a bit of our strategy and finance planning around technology. IT is an important partner within the league. We're there to help both the goals of the game itself and the running of the business. We're a regular business with the same departments that every other business has, with licensing agreements and contracts and everything every other business does, and all of it needs technology.
Fabrizio, CIO, Aston Martin F1: I'm Fabrizio, the CIO of Aston Martin F1. Similar story. The prominence of IT, in terms of infrastructure, software development, AI, and storage, especially in an engineering world like Formula 1, is extreme. In a couple of days we are in Austin, then we go to Mexico, so there is also a huge element of international networking, data storage, and events to manage. NetApp is one of our key partners, and over the last years it has helped a lot to improve the engineering side and how we manage the trucks at events. Beyond that, there are the typical IT topics that are more or less the same everywhere: cybersecurity, license management, HPC, the calling system, the stuff more or less everybody is dealing with.
Costa Cladianos, EVP of Technology, San Francisco 49ers: I'm Costa Cladianos, the EVP of Technology for the San Francisco 49ers at Levi's Stadium. Our teams oversee all the technology components of the stadium, of the team, and of the events around it. We already work very closely with the NFL, especially this year as we host the Super Bowl, a nice little event we're going to host in the valley.
Aaron Amendolia, NFL: All the pressure's on Costa. He has to keep the lights on.
Costa Cladianos, San Francisco 49ers: Exactly. I mean, being the San Francisco 49ers, there's a little extra spotlight on us because we are in Silicon Valley. We really pride ourselves on being leading edge in technology, creating some incredible value, and being an example for other sports and entertainment organizations. We have amazing tech partners, we're in the hub of innovation, and we really try to take it to the next level with how we use technology in sports and entertainment.
George Kurian, CEO, NetApp: All right. You guys are obviously all NetApp customers. Maybe you could talk a little bit about what you're using NetApp for, what role it plays in your environments.
Aaron Amendolia, NFL: Yeah, I can start. NetApp has been with us a long time. We chose NetApp because of its comprehensive, one-system approach. We are hybrid cloud, and we are a media company: a typical NFL game is 1.4 terabytes of captured video, and we have different workflows that need that video. Our officiating workflows and our media production workflows are all high-IO, low-latency workflows, and as we incorporate new technology into them, like AI and computer vision, we need high performance. We play another 280-something games throughout the year, store that 1.4 terabytes of video per game forever, and keep all the other media clipped around each game.

On top of that, we take data from the sensors the players wear and from a new skeletal tracking system with 32 cameras around the ring of each stadium. We have to time-slice and synchronize all these sources of data and do it in a performant way, on-prem and in the cloud. You want a comprehensive system as you manage this, both for archiving to the back end at the best cost efficiency and for high performance on the front end where our applications live. We're also moving data between clouds, because we do have multiple clouds across AWS, Microsoft, and others. You need a one-system approach, versus multiple systems from different providers that don't work well together.
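As a rough illustration of what time-slicing independently timestamped feeds can look like, here is a minimal Python sketch; the stream names, sample rates, and slice width are invented for the example, not details of the NFL's actual pipeline.

```python
# Illustrative only: bucket (timestamp_ms, sample) pairs from several
# feeds into fixed-width time slices so they can be read side by side.
from collections import defaultdict

def time_slice(streams: dict[str, list[tuple[float, str]]],
               slice_ms: float = 100.0) -> dict[float, dict[str, list[str]]]:
    """Group each stream's samples by the slice their timestamp falls in."""
    slices: dict[float, dict[str, list[str]]] = defaultdict(lambda: defaultdict(list))
    for name, samples in streams.items():
        for ts, sample in samples:
            start = (ts // slice_ms) * slice_ms  # slice start time in ms
            slices[start][name].append(sample)
    return {k: dict(v) for k, v in sorted(slices.items())}

if __name__ == "__main__":
    feeds = {
        "player_sensor": [(3.0, "accel"), (97.0, "gyro"), (151.0, "accel")],
        "camera_12": [(0.0, "frame0"), (33.3, "frame1"), (166.7, "frame5")],
    }
    for start, by_stream in time_slice(feeds).items():
        print(f"{start:>6.1f} ms: {by_stream}")
```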
Fabrizio, Aston Martin F1: We have a similar situation in Formula 1; the problem statement in sport is pretty similar. On our side, there is a huge density of data in terms of telemetry. We get telemetry from the car and from the engine. We get data from the video streams. We get strategy data. At the factory we have PLM, CFD simulation, wind tunnel data, and the dyno. Given the amount of data this creates, the latency, the density, and the algorithms we apply on top, statistical AI became de facto the differentiator between the teams. The investment in that area became bigger and bigger, and at some point you start to win or lose on whether you have that kind of technology. It is the de facto standard across all Formula 1 teams.

My previous experience was with another team, and it is the standard there because of the same problem statement. Then you have the races, which are similar to the events you were mentioning: at some point you have to move all of this around the world, with all the connectivity and the network, while the engineers stay in control at Silverstone, where we have our factory. That requires that kind of storage system, that kind of intelligence in the storage, the kind of metadata management that makes you win or lose.
Costa Cladianos, San Francisco 49ers: From our side, we have a few different buckets that are critical to us. On the foundational side, in the offseason we did some extensive renovations to Levi's Stadium. One of those was putting in the world's largest outdoor 4K video boards. That creates a huge amount of data, and that data has to be available quickly for the multimedia component. We had a 10-year-old system and needed something better, so we went out and got the best in the business. We cannot afford to try things or give someone a chance: we have 70,000 people there on a game day and millions watching around the world. We have one chance to get it right.

We had to go with NetApp to move that data across the network, display it on the video boards, and create that experience for our fans. With 70,000 people, you can imagine the amount of data we collect, and we've been in Levi's for over 10 years now, so we do a lot with it. We have an executive huddle that shows us in real time what's going on in the stadium, from when you park your car, to getting through the ticket canopies, to the concession stands, to even how much in utilities we're using. It's an incredible amount of data, and being real time, we need it fast and we need it reliable, because again, we have 10 to 12 games a year and another 10 to 12 concerts or more.

We don't have the luxury of some other sports, where they have 200 or 300 events. We have to get it right in the moment, or we lose an incredible chance to excite our fans and a revenue-producing opportunity. That's why it's extremely critical that we have the right data at the right time and that it's reliable. Looking forward, being at Levi's Stadium, as I mentioned earlier, we have to be at the leading edge of technology, not only for our fans but for our partners and even for our team, because the ultimate goal, of course, is to win a Super Bowl. I like to call it the intelligence stadium: how can we use the latest and greatest technology, obviously AI, machine learning, and data, to start getting predictive about what we want to do? We're already great at iterating in real time.

How amazing would it be if we created a fully frictionless experience? From your couch at home, you know when to leave, when to get to the stadium, and where to park without delays, because that's your first impression. When you get there, we want you to arrive happy and leave happy, unless we lose. We also want the right amount of inventory in stock: not too many hot dogs, not too few, and the beer on tap, because that's critical for us. We don't want to create waste at all. Utilities, as I mentioned: we use an incredible amount of electricity and water in our stadium, and it's important for us to be good climate citizens.

Being efficient there not only saves us money; it's green, it helps us protect the environment, and it sets an example for others. Those are some examples of how we use it. On the football operations side, they have an incredible amount of data, and they use it very quickly. We need the best foundation powering that, so they only have to worry about getting wins and the fans only have to worry about enjoying the game. It works like a referee: it's best when it's not noticed.
George Kurian, CEO, NetApp: All right. We've got some questions, starting with Lou.
Lou Mitsocha, Daiwa Capital Markets: OK. Lou Mitsocha, Daiwa Capital Markets. For the football guys: any comments or help for the New York Jets?
George Kurian, CEO, NetApp: All right. Lou, you got it.
Lou Mitsocha, Daiwa Capital Markets: You talked about the huge amount of data being created on a daily basis. Are your storage purchases linear to that? If not, what are you doing to manage your data so your spend isn't massively increasing, even though obviously everyone at NetApp would enjoy you having a massively increasing budget? Just trying to understand how you manage the growth.
Aaron Amendolia, NFL: Yeah, I wish it were linear. On the league side, it depends on which video format the broadcasters capture in, plus our own; we set a video standard, and we share video with all of the clubs. The league replicating video used for game preparation is pretty linear. Then we add new technologies. The 32-camera ring I talked about went into all stadiums just this year. It was only in six stadiums for a POC last year, and the year before that it was just R&D; we didn't know whether it was going to be successful. Now you're trying to make a storage purchase when six of the cameras doing ball measurement are 8K and the remaining ones are all 4K. These are huge data streams coming in.

You're not going to make that purchase in advance until you're sure the technology works. That flexibility is very important to us, because we have to show the ROI or the value returned. Not only are we taking all this data in and providing the connectivity for it, we have to use it and store it. The question is whether that technology justifies the cost of the storage on the back end. We found value in this, and we find value in multiple buckets when we do projects like this. That one used computer vision and AI to measure the ball for first downs, and that's just the first part. We're also measuring all the players' skeletal movements and asking: is that a new asset? Does it create new revenue streams?

Or does it help another goal of the company, which is to speed up the game or improve the game itself and assist with officiating? That's a really big goal of ours. Does it create new efficiencies that lower other costs or eliminate manual tasks? Those are all the factors that go into these investments in the infrastructure to store that vast amount of data. It's kind of like a pop cycle: we might sit for a couple of years on what we've established, and then a new technology or a new need comes along and it pops, and it really increases the amount of storage we need.
Costa Cladianos, San Francisco 49ers: On our end, we try to forecast, looking at what we currently need and what we think our growth is going to be. As Aaron said, technology isn't linear; it moves in jumps, and you don't know when the next storage hog is coming. We like a hybrid approach, because we can add what we need when necessary, and Rob's always happy to take my call to add storage. It's important for us to understand what we need while knowing we're going to have to grow later. We can't have a closed system; it has to be something that can grow. There are always workloads that are better for cloud, and there are always workloads that are better for on-prem.

We have to find that balance, use the best fit for each, and keep the ability to scale in the future, because you can forecast, but you never know what's going to happen tomorrow, right?
Aaron Amendolia, NFL: And we're never throwing anything away, right?
Costa Cladianos, San Francisco 49ers: Yeah, exactly.
Aaron Amendolia, NFL: I mean, literally, the player health and safety algorithms we're developing go back over the historic footage NFL Films has captured over 100 years, comparing safety and injury and all these different stats using new algorithms and AI as they emerge. We go back and use the data we thought we had archived away forever all the time.
George Kurian, CEO, NetApp: All right. Wamsi, and then I'll get to the guys in the back.
Wamsi Mohan, Bank of America: Thank you for doing this. Wamsi Mohan, Bank of America. From each of you, to the degree that it's different: as you think about high-performance storage versus cold storage, it sounds like some applications, certainly what you're capturing, reviewing quickly, and streaming out, need high-performance flash, while other things might be cold storage. Can you give us some sense of how your environment splits between what you'd call hot or warm and cold storage across your installed base? Secondarily, on pricing: people talk about HDD shortages and memory shortages. How much does that concern you, how do you plan for it, and how do you manage those cost escalations in your negotiations with NetApp?
Aaron Amendolia, NFL: You have more sensors, and you run at a faster pace than even we do, right?
Fabrizio, Aston Martin F1: First, on the different tiers of storage and how we decide: in Formula 1 we tend to have a moving window, and its length depends on the technology changes ahead of us. For example, for 2026 we have structural changes to all the rules: the car, the fuel, the engine, the tires. That, in principle, decides how much data you want to keep on fast storage and how much you pull back to cold storage. Sometimes we even use an old backup system that we retrieve from only when we need it. The magic is working this out together with the engineers, and we have to adapt the strategy continuously, as you were saying before: take a decision now, review it on an almost monthly basis, and then move to an alternative solution if needed.

For us, the financial negotiation is easier, because NetApp is not only a sponsor; it's one of our partners. We work hand in hand and are very close, with a direct link to senior management. They want us to get better at the engineering and win. We are on the same journey, so it's less complicated than the classical approach your use case would probably involve.
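To make the moving-window idea concrete, here is a minimal, hypothetical Python sketch of such a tiering rule; the window length, asset names, and fields are invented for illustration, not Aston Martin F1's actual policy.

```python
# Illustrative only: keep data touched inside a moving window on fast
# (hot) storage and age everything else to a cold tier.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Asset:
    name: str
    last_relevant: datetime  # e.g., last session that referenced it

def tier_for(asset: Asset, window: timedelta, now: datetime) -> str:
    """Hot if the asset was relevant inside the window, else cold."""
    return "hot" if now - asset.last_relevant <= window else "cold"

if __name__ == "__main__":
    now = datetime(2025, 10, 14)
    # A rules reset (such as the 2026 regulations) might shorten the
    # window, since older car data loses engineering relevance sooner.
    window = timedelta(days=180)
    assets = [
        Asset("wind_tunnel_run_0042", datetime(2025, 9, 30)),
        Asset("telemetry_2023_monza", datetime(2024, 3, 1)),
    ]
    for a in assets:
        print(a.name, "->", tier_for(a, window, now))
```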
Aaron Amendolia, NFL: Yeah, I'd say we have a mix of all-flash arrays as well as storage gateways for the back-end archive. These change based on workload. Part of the strategy is making sure our storage engineers understand which workload to manage where. You're profiling your workloads for your environment, and you're also working out how hybrid cloud fits in and which workloads happen in the cloud. Sometimes that's with a partner. More and more, you may not control all your cloud environments: you're sending or sharing data, video, and content with another partner who's in AWS, and you still have to get something back from them. We take video back from them after they've enhanced it, and sometimes that goes directly onto on-prem storage and sometimes over to our cloud.

This management scenario gets pretty complicated. Really, it comes down to profiling our workloads, figuring out the right performance characteristics for each, and then managing our storage efficiently, because we're not going to fill high-performance storage with things that should be in archive or elsewhere.
Costa Cladianos, San Francisco 49ers: Yeah, similarly for us, our engineers calculate what we need for cold and what we need for hot. In terms of cost, adding storage is a drop in the bucket compared to the investment we have in the players on the field, in the stadium, and the revenue we get back. If something went wrong on a game day, the lost revenue would massively outweigh anything we'd spend on back-end storage. So we tend to value high-availability, redundant, and secure environments; that's more important to us than cost when you look at the relative risks on the other side.
George Kurian, CEO, NetApp: All right. The answer, as to all things in IT: it really depends.
Lou Mitsocha, Daiwa Capital Markets: Exactly.
George Kurian, CEO, NetApp: How much hot?
Lou Mitsocha, Daiwa Capital Markets: That and reboot.
It depends.
George Kurian, CEO, NetApp: Is it plugged in?
Lou Mitsocha, Daiwa Capital Markets: Yeah.
George Kurian, CEO, NetApp: All right. Fredrik and Ananda have questions.
Actually, first question would be, where can I get one of those jackets? That was pretty nice.
It's actually down at the gift shop.
There's a gift shop.
I was saying, hey.
Make sure you exit through the gift shop.
Thank you. Appreciate it. Important question: I've heard the word unified so many times today. I would love to hear from an actual NetApp customer how AI-ready you feel your internal data estate is, in terms of how unified it is across the organization, and how you expect NetApp to maintain that unification of the data estate if it's already fully there. If it's not there, how do you expect NetApp to help you achieve it?
I'm going to tack on to your question, because it's one that I've had too. All day today, we've heard about the importance of breaking down silos to make sure your data is AI-ready. How do you think about that, and how is it impacting your underlying infrastructure?
Fabrizio, Aston Martin F1: AI readiness and the journey of AI: Formula 1 is an engineering business, and to a significant extent we started the AI journey before it was even called AI. Formula 1 teams have long managed huge data sets in an engineering way, with lots of statistical analysis and models built on top. What the machine learning approach gave a Formula 1 team, in engineering terms, built on something already present: you have an engineering problem that is intrinsically structured to be analyzed statistically. This was before AI. When computational power allowed us to move into very sophisticated multi-dimensional machine learning, the readiness was already there, because the data was intrinsically that kind of data. The journey is complex.

Given the engineering nature of the business, we were on the front line from the beginning. A partner like NetApp was, and de facto still is, the only way for us, because you have to crunch the data sets. It's not so much the sheer amount of data; it's the different streaming feeds and what's inside the data. A single engineering problem can have 20 dimensions and span multi-dimensional spaces. The sweet spot of the car's setup could be in an area you hadn't even thought about. We have been doing this kind of analysis all across the last decade.

De facto, NetApp was the only way to do it, because crunching that amount of data at that speed, without being limited by the storage factor, was critical. That's why it is a de facto standard in Formula 1; it was the standard almost 10 years ago. I'm not sure how the NFL would answer, but for us it was never really a discussion, if that makes sense.
Aaron Amendolia, NFL: I think we've been doing machine learning a long time with the sensors and some of the optical tracking we've run for many years. This always feels like a trick question: how do you know you're AI-ready? You know when you're not, because something fails or doesn't work; you don't really know you're AI-ready for everything that can possibly come. A lot of our media workflows were very specialized in the past. Whether you were working with media for football purposes, producing games for NFL Network, or producing long-form content with NFL Films, there was a whole specialized metadata workflow around each, very manually tagged, with vast, deep archives behind it.

When someone on the social team wanted to post to all the different channels, or someone in marketing wanted to create something with partners for PR purposes, they went to those specialized roles and said, go source me clips of Tom Brady and all the Super Bowl wins he's had. Now multimodal AI comes in, and we potentially no longer need metadata tagging to find and source video. Not only that, it's richer and faster: I can search for Tom Brady with no helmet on, with a Pepsi sign behind him, because that's our sponsor, at a post-game trophy ceremony. That's something you couldn't easily tag for before, and the social department can go find these things on their own.

All of that is stored in specialized stores behind where the media workers are. It's not that we have the wrong storage or the wrong architecture; in fact, we made the right choices there. It's about exposing it at the right layer so these applications can access and use it. You have to watch that: now I have all these new stakeholders coming in and hitting a storage area that was scaled for one department's use, fairly contained, and I'm exposing it. I also want my fans to be able to hit it and use multimodal AI to source clips from our historic archive, so now we have to place it where it can take a public workload.

Some of this is exposing things that were once cordoned off and specialized at different layers across the company, and we have to look at our architecture and make sure it scales for that. That's the importance of having one cohesive system from one partner, so it can scale, versus buying the cheapest storage for each of these places, which happens at a lot of media companies: you buy whatever is cheap this year, dump a whole bunch of video on it, and only the people in the video department know how to pull it up. No, we need something that can instantly pull up video across the organization.
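As a toy illustration of tag-free retrieval, here is a minimal Python sketch of ranking clips by embedding similarity; the clip names and the tiny hand-made vectors are invented, standing in for the output of a real multimodal model.

```python
# Illustrative only: find clips by similarity to a query embedding
# instead of by manually assigned metadata tags.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Pretend a multimodal model already embedded each archived clip.
clip_vectors = {
    "brady_trophy_ceremony.mp4": [0.9, 0.1, 0.8],
    "halftime_show_2019.mp4": [0.1, 0.9, 0.2],
}

def search(query_vector: list[float], k: int = 1) -> list[str]:
    """Rank clips by cosine similarity to the query embedding."""
    ranked = sorted(clip_vectors,
                    key=lambda c: -cosine(clip_vectors[c], query_vector))
    return ranked[:k]

if __name__ == "__main__":
    # In a real system this vector would come from embedding a text
    # query like "Tom Brady, no helmet, Pepsi sign, trophy ceremony".
    print(search([0.85, 0.05, 0.9]))
```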
Costa Cladianos, San Francisco 49ers: Yeah, and the question you ask about data hygiene is important. Even a year ago, the thinking was that you could just dump anything into the system and the AI would figure it out and give you the answer you want. What we've seen is that there's a tremendous amount of bias in there. Just as in the machine learning era, in any era, you have to have clean data to get the right results. In our organization we have a group called the Intelligence Stadium Committee: decision makers and subject matter experts from each department. We sit down once a month, and we have a group channel.

We also have ad hoc conversations where we look at the organization's priorities, what data each group has, and what they're trying to accomplish. With that understanding, we look at whether we can apply AI, machine learning, or other tools to enable their business, so it doesn't just look like an IT project doing AI for AI's sake. We know AI gets thrown around everywhere, rightly or wrongly. We want good use cases, the right data behind them, and that data's integrity checked, because AI still has a tendency to hallucinate. We want the right people verifying that we get the right results. Data hygiene is incredibly important. It's not a case of being AI-ready.

It's an iterative process where we're always looking at it, always improving, always trying to find the mistakes. Once you take your eye off the ball, that's when errors get into the system and everything breaks down.
George Kurian, CEO, NetApp: All right, Ananda.
Ananda Baru, Loop Capital: Yeah, thanks. Ananda Baru, Loop Capital. Thanks for doing this panel; it's a pretty cool one. Each of you has spoken about how critical video is to your respective businesses. Is there something about the heavy use of video that makes NetApp particularly attractive to you?
Costa Cladianos, San Francisco 49ers: Yeah. For us, when we talk about the 4K video boards, that's a lot of data. When we talk about 8K now, and about the resolution the players need when they're scouting or at practices, that's a lot of data needed quickly. We looked at NetApp, and, actually, DreamWorks uses them; if it's good enough for them, it's good enough for us, because they have some heavy workloads. It's important that the data is quick and reliable. When we went out and did our research, they were really the only ones who could handle the workloads we run, because the NFL has always been a video-heavy sports league; it's built on that more than a lot of the other leagues. What we do in-stadium with video for our fans alone is an incredible amount moving incredibly quickly.
We have to have something reliable and highly available.
Aaron Amendolia, NFL: Yeah. The officiating department, for example, is going to scrub through high-frame-rate video. They'll take it and move a dial to go in slow motion, forward and reverse. What's important for them is that it's not dropping a frame, not skipping, not jerky. They're making a decision for a live game; that's a replay moment. They want to make that decision in the AMGC in New York, and seconds count. Any performance issue with the video is a problem for us, as is the reliability and robustness of it simply being there. I think one thing we can all say up here: NetApp's a sponsor of the league.

NetApp was a sponsor of the league in the past, then for a period it wasn't, and now it's back. We did not stop using NetApp in that interim period, and I think that's more important than NetApp being a sponsor: other storage vendors came to us and said, we'd like to be a sponsor, take our storage, and we said no, we're happy with what we have, and it's performing. Now we're really happy NetApp's back, because we get to tell the story together, and I get to bother them about product and engineering, which is really what I want to do. I want that product deeply ingrained and partnered with us. What we've seen is that quality. Then we add in the protections of the cybersecurity layer.

We're very happy with the ransomware protection and the other features while it's still performing. Other platforms don't give you all those things; it feels like you're picking between fast, cheap, and performant, and here you're getting all three. The old adage is that you only get two in that kind of equation.
Fabrizio, Aston Martin F1: It's a similar situation in motor racing, an evolution over the last 10 years that came with the number of cameras put everywhere: on the cars, from different perspectives, trackside, even one on the helicopter. There is a huge amount of information there. The other piece that's very relevant for motorsport is the audio channels, which are very important between drivers and engineers. That kind of information is not the classical vector information you get in engineering. You need to massage it, analyze it, and, because it comes in such quantity, extract the information you're searching for. Again, it's an area where NetApp is de facto the only way to do it for us. There were similar conversations in Formula 1.

There are other companies, but they would never even pass level one of a tender conversation. So many decisions ride on that kind of engineering data: video streams, GPS streams, data enriched by position, weather data. If you compromise there to save some capital investment, which you could, and the finance department would be happy, you pay a lot for it. Other teams are not compromising, and at some point you see it. You may not see it directly in the car's performance, but after a while you start to feel it, and you start to feel the difference with the other teams. It's pretty clear that this is something you can't compromise on, if that makes sense, unlike other areas where you might say, yes, you can save here and there.

Are you really saving? It's a good question, and it's more or less the same answer across the whole of Formula 1 and across other sports. The story is that in these cutting-edge sports scenarios, which are very sophisticated, with lots of cameras and lots of events located all over the world, connected to central teams in different cities, like New York, as you were saying, or our many centers in the UK, if you don't have that kind of infrastructure, you end up paying a lot for something you thought was a saving, and that saving is in principle something that makes you lose in the sport. In sport, it's about winning. The return on investment in sport is different from a normal financial or commercial business.

If you don't win, if you're second, you're the first of the losers; that's a quote from a famous Formula 1 person. So you don't compromise there; you compromise somewhere else. We compromise in IT, in lots of other departments and areas, but there are two or three things you can't compromise on. Cyber, you can't compromise: compromise there and you risk your whole business. And storage, de facto, you don't compromise.
George Kurian, CEO, NetApp: All right, Tim.
Tim Long: Hi, it's Tim Long. Maybe a quick one for each of you. There were a lot of new product announcements up and down the business for NetApp today. Did anything resonate as something that fills a need, or something you're excited about adding to the portfolio? If not, was there anything in the last year or two that fit that bill, something that addressed a key pain point you really wanted to deploy quickly?
Costa Cladianos, San Francisco 49ers: I mean, for us, I'm excited to go through them, because there's a lot out there. It's fantastic. I was meeting with some of the leadership this morning: OK, how are we going to put this in here? What can we use? How can we apply it to hit our business goals and create value? We're definitely going to take time with this morning's announcements, which were extremely exciting, and see how we can incorporate them into our business. Our goals are to excite our fans and win Super Bowls: two very easily said goals, not as easily done. Let's see how we can use these. It's exciting that innovations and iterations come every year. We are moving forward, which is what we wanted; we wanted more than just a storage company.

We want someone who's going to work with us so we can start using all this and generating value. I'm very excited to dive into it. I don't know anything specific we want to do yet, but when I'm back here next year, or sooner, I'll tell you how successful they were. I'm sure they will be, hopefully with a ring on. We'll see.
Fabrizio, Aston Martin F1: The one announced in the keynote with NVIDIA was interesting. The two companies are part of a journey that is happening all around the world, in different businesses and areas, but with a very consistent approach and pattern. Both companies found themselves in an evolution. NVIDIA, we all know the story: it started by doing something different, and at one point the computational capacity on the GPU side reached a threshold. For NetApp, storage capacity reached a point where, all of a sudden, a new business was created on top of what storage had been before, because storage has been part of computer science and IT for more than 40 years.

At one point, on top of the storage itself, just as for NVIDIA on top of the graphics card itself, which was a completely different business, something new happened; something emerged out of the storage and out of the GPU. I think the two companies together can create something that emerges in a very interesting way in the AI space, because we know very well that AI is extremely intensive on the storage side and extremely intensive on the GPU side. The engineering problem statements we have in motorsport are complex: fluid dynamics for aerodynamics research, engine research, gearbox research, tires, strategy. They require an incredible amount of data and an incredible amount of computation. I think the collaboration presented yesterday and today may produce something very interesting in the next five years.

For us in motorsport, it's something we're looking at very closely. Like I said, in 12 months it would be interesting to meet again; there could be some very interesting use cases.
Aaron Amendolia, NFL: Yeah. Maybe you all can help me with our problem. Today we were talking about the massive amount of data we have, in different areas, from different sources, that needs to be synchronized to high-quality video. Some of that video is public, what you see in the broadcast; some of it is from these other cameras we've talked about, and we're going to create derivative products from it. We were talking today with an engineering team that spans the whole organization about a central data hub. As I hear all the announcements today, I'm trying to think, and all your analyst brains can help me here; we'll compare your answers to ChatGPT's or Gemini's. How do I put this technology together to accomplish this goal of taking all these data sources in, sensors and cameras alike?

We actually still have humans who score the game, because computers haven't yet figured out who gets credit on a sack; it's probably coming. We have to put all of this in one place, and it's used in other use cases too. We talked about player health and safety before: post-game, they go back and look at the data, including the skeletal model data. They're actually adding a muscular structure, because the skeletal data literally is a skeletal model; they use another algorithm to add the muscular structure and compare it to the real video from all the different angles we have. And then we ask: that's really cool for player health and safety, but might it have a future use in AR, VR, and gaming?

I want to put all that data and all those sources in a place that can be accessed once, because the current problem is that we're storing it in so many different areas. We're duplicating: duplicating it across to a partner who developed a given technology, when I want to expose it to them without letting them take our data out of our data centers and out of our cloud. That's what we're looking to NetApp for in these cases. These are all going to be AI-driven, machine-learning kinds of use cases, and I want to centralize the data and expose it for both internal and B2B purposes, eventually maybe downstream to B2C. That's what I'm listening for in these keynotes and announcements.

I'll have to see how these things play out. I definitely appreciate all your input on what we might build out of this.
George Kurian, CEO, NetApp: All right. We have time for one last question. It goes to Mike.
Mike Cadiz, Citi: Thanks again for being on the panel. This is Mike Cadiz from Citi. In the multi-cloud environment that we're in, how do you decide between first-party or marketplace NetApp offerings versus the competing hyperscaler offering? It could very well be NetApp, and if not, why not?
For Formula 1 as well?
Aaron Amendolia, NFL: I mean, consistent governance and controls is one thing, right? Like I was saying with all those other buckets before: if you want consistent governance and controls, you have to have one platform, or at least one logical rule set, that can apply across all these different environments.
Fabrizio, Aston Martin F1: There were similar questions before, and this is something where we don't even have the luxury of the question. The problem statement is so complex, in data density, data speed, and the need to extract engineering information. There are 11 teams next season, 10 teams currently, plus the FIA, plus FOM, plus the whole Formula 1 endeavor. We are in Austin this weekend; in a couple of weeks we'll be back in Vegas. All of them use the same technology. De facto we are talking about a company, and I mentioned another such company before, that found the right way over the last 30 years through something that was exploding all around the world. It was the same working in the financial industry.

In the 2000s, the amount of data coming from all the markets started to become very complex, and the financial industry was a bit ahead of the game. At one point, some companies got it right: how they ran their production, their distribution, their sales approach, the way they do contracts. Thirty years down the line, they created in our area a de facto monopoly. The problem statement is that important, and you can only solve it like this. It's not that we didn't look at other solutions. I'm not sure how much you did.
We have, yeah.
At one point, the decision becomes pretty straightforward. You look at the alternatives and at the problem statement the business gives you, and you can look at different investment schemes, leasing schemes, and so on; we have all of those available, plus they're a partner, so it's even more. We're pretty confident, pretty relaxed with it. To be honest, managing an IT department, I have other problems. This is an area that has performed consistently for more than a decade. I'm confident the company has the right management and the right approach. They did it right, they did it right again, and they're doing it right now with AI. It's a pattern: they are managing their business right, and the results are there in the market, I would say.

Of course, depending on how things evolve, if we had discussed this 25 years ago we would have had a completely different conversation. It's the same as with operating systems, CPUs, RISC and CISC: at some point these things converge. At the current state, they've done it right, and we're very, very happy.
Costa Cladianos, San Francisco 49ers: Yeah, and for us, we have the luxury of being an NFL team in Silicon Valley. We don't have to take just anyone who comes along wanting to be a partner. We looked at everything, we looked at the competitors, and this was the best of the best for us. When we look at a cloud hyperscaler environment, since we have that hybrid environment with on-prem, we wanted to stay consistent, with security being the most important thing, plus high availability; it has to work. The less that can break, the less chance of breaches, and it has to all work easily for us. I'll go back to that earlier point.

We were lucky enough to have our pick and to go out there and ask: who's the best of the best? Who fits what we're trying to do? It's not like other organizations I've been in, where partnerships were handed down: here, these guys want to be our partners, go take them. It wasn't that. It was: let's get the best, create authentic experiences, and use them. That creates a real partnership, and then we can work together as actual partners.
Aaron Amendolia, NFL: Never underestimate the friction of reskilling. When your engineering team has something precise working right, you're not going to move things across multiple clouds and change back ends and switches and controls just to get something cheaper. We're going to ignore that option, because we don't want an environment that carries the risk that we didn't understand it and therefore it failed on game day, right?
George Kurian, CEO, NetApp: All right. Aaron, Fabrizio, Costa, thank you guys so much. This is so fascinating. I could keep you up here for like another hour. I know you have things to do. I really appreciate your time. Thank you.
Thank you. Thank you very much.
Thank you. Thank you. Best customer panel ever, hands down. All right. We're definitely ending today's session on a high, with some great real-world stories about the value of the NetApp data platform and the importance of having consistency of your data across hybrid multi-cloud environments. I know you've heard it from me for a long time; I'm glad someone else said it. For everyone on the webcast and for those of you here in the room, there's another general session that will be available for streaming tomorrow morning from 9:00 to 10:30 A.M. Pacific. If you missed this morning's general session, you can catch the replay. All of that is available, with links on the IR website as well as the Insight website; just search NetApp Insight and you'll get all the links.

Thanks to everyone for joining us here in person and on the webcast. Thanks, everyone. Have a great day.
