Trump-Zelensky meeting ahead, Fed rate outlook in focus - what’s moving markets
On Tuesday, 29 April 2025, Palo Alto Networks (NASDAQ:PANW) hosted a conference titled "Hello Tomorrow | What’s Next in AI Security." The event highlighted the company’s strategic focus on integrating artificial intelligence into cybersecurity. While showcasing positive advancements in real-time threat detection, the company also acknowledged the challenges posed by increasingly sophisticated AI-driven attacks.
Key Takeaways
- Palo Alto Networks is investing heavily in AI infrastructure to enhance real-time threat detection.
- The company is ingesting 11 petabytes of data daily to improve security solutions.
- Introduction of AgenTix, a platform for developing custom security agents.
- Announcement of the intent to acquire ProtectAI to bolster end-to-end AI security solutions.
- Emphasis on collaboration with partners and customers to refine AI-driven security measures.
Operational Updates
- Palo Alto Networks is processing 11 petabytes of data daily to support its security solutions.
- Deployment of over 100 Xi’an solutions to customers, with nearly 300 sold.
- Cortex XIM features more than 10,000 detection models, including 2,600 based on machine learning.
- The newly introduced AgenTix platform aims to develop custom security agents.
- Plans to integrate AI and automation into vulnerability management and email security.
Future Outlook
- Palo Alto Networks aims to drive intelligent integration across the cybersecurity industry.
- Focus on developing cybersecurity agents for network, cloud, SOC, and threat management.
- Strategy to provide solutions and gradually transition to autonomous operations.
- Future enhancements in AI and Agentic AI will require a fusion of AI and automation.
- Anticipated growth in agent-based AI systems will necessitate robust security measures.
- The company plans to acquire ProtectAI to enhance AI security capabilities.
Q&A Highlights
- Ian Swanson, CEO of ProtectAI, discussed the urgent need for comprehensive AI security solutions.
- The importance of securing AI agents and their integration into business processes was emphasized.
- Concerns were raised about new risks from AI agents, stressing the need for safety and trust.
- Palo Alto Networks aims to be the trusted partner for securing AI at scale.
For more detailed insights, readers are invited to refer to the full conference transcript.
Full transcript - Hello Tomorrow | What’s Next in AI Security:
Kelly Walder, CMO, Palo Alto Networks: Well, good afternoon, everyone, and welcome. I’m Kelly Walder, CMO of Palo Alto Networks, and we’re thrilled that you’re here with us during one of the busiest, boldest, and noisiest weeks in cybersecurity. It’s a week that’s packed with big announcements, new buzzwords, ambitious promises. But let’s be honest, sometimes it’s hard to tell the signal from the noise, and that’s why it means so much to us that you’ve chosen to be here today. But we know you’re not here just for the latest trends.
You’re here for real innovation, for solutions that aren’t just louder but smarter, Not just flashy, but genuinely transformative. At Palo Alto Networks, we believe cybersecurity isn’t just about staying ahead of threats. It’s about delivering clarity where others deliver complexity. It’s about building trust, reliance, and true operational impact, not just another headline. And it’s about anticipating what’s next in order to secure the future.
Thank you again for being here. We can’t wait to show you what’s next.
Unidentified speaker: There’s an entirely new vector of attack in a business through AI.
Unidentified speaker: Some of the things that we’re doing together as partners are things that I thought about and dreamt about as a kid, and now it’s real.
Unidentified speaker: AI has emerged as a really powerful and once in a generation type of force. Since the invention of electricity or discovery of fire, it might be civilizationally one of the most important paradigm changes for us.
Unidentified speaker: We’re very invested in the future of AI and the technology that it is enabling. So we need to make sure people are using it in a safe way.
Unidentified speaker: We believe that the next step of evolution in our organization is artificial intelligence. What this enables is us to be more comprehensive in the responses that our team members are able to give to our customers.
Unidentified speaker: It’s very important to know how you’re using these AI tools and what data is being put into it because if that data is misused, it will affect the brand in a negative way, which is unacceptable for us.
Unidentified speaker: What we want is every employee to realize their full potential by adopting all the AI tools that the enterprise has to offer. You focus on the creative aspect. You focus on the innovation. That’s what we mean.
Unidentified speaker: If their data governance and auditing is not in a good place and they allow AI applications to consume this data, allow employees to generate insights, they’re basically sitting on a ticking time bomb.
Operator: Please welcome chairman and CEO, Nikesh Arora.
Nikesh Arora, Chairman and CEO, Palo Alto Networks: All right. Welcome, everybody, and thank you for being here. As Kelly said, you have a lot of options to look around and see all kinds of different cybersecurity solutions, and we’re glad you chose to come to the platform vendor of the industry. We appreciate it. The good news is we’re gonna talk about what everybody’s talking about, which is called AI.
The bad news is my team’s told me they’re gonna do all the demos and tell you all about the cool stuff so I’m supposed to keep you entertained for fifteen minutes without actually saying anything substantive. It’s not hard to do. But we’re going talk about AI. We’re talk about AI, we’re to talk about data. We’re going to talk about the fact that our platform vision is still alive.
A few years ago, about eighteen, twenty four months ago, we told you that we felt the time for best of breed was slowly going to migrate towards the time for the platform. We’re seeing that in spades everywhere. We have come to a point where we have two platforms: our network security platform and our Cortex platform, which effectively is our platform which, in our mind, replaces the next generation SIM or the SIM of the future because that’s kind of where all the action is. Now, what’s changed in the last twelve to twenty four months is that with this conversation about AI, two major changes have happened. One, AI is becoming a tool not just for the good guys, but also for the bad guys.
As a consequence, the time from when a bad actor decides to focus on you as one of their people they want to go after, the time from when they decide that the time when they can get in and exfiltrate your data has compressed under an hour, which means we’re getting more and more close to real time and real time protection as you need to be. As a consequence, the entire industry has to pay attention to how do we go from the traditional mechanism of protect what you can, send everything else somewhere else, and have that analyzed and eventually take some time to figure out what actually happened, how to remediate it. You don’t have the luxury of time anymore. So what you’re going to see as a recurring theme, not just today but over the next few years, is how the industry has to pivot and go towards more and more real time as possible. And what will experience is that we have been on this journey of what we cannot stop at the edge, let’s make sure we can analyze and go back and protect as quickly as we can.
We can remediate, we detect and remediate, and protect as quickly as we can. So one theme, which I expect you’re going to constantly see, is the idea of getting as close to real time as possible. The second thing which you’re going to hear about, which I think you already know but you’ll keep hearing about more and more, and that’s in the broader context of AI. All of us in cybersecurity exist to go deliver AI solutions to our customers because we hear the big thundering noise of people wanting to spend $350,000,000,000 to build infrastructure faster than any piece of infrastructure has ever been built in technology. Think about it.
Twenty four months ago, we’re all wondering what are we going to do with chips, what is going happen with the supply chain crisis, what’s going to happen with the pandemic. And today, we’re talking about where most large tech companies are boldly claiming they’re going to spend 70, hundred, hundred and $50,000,000,000 building data centers, where there was no inkling that this was going to be something that was going to be relevant about any year or two years ago. What’s going to happen? What is going to happen when three fifty billion dollars of AI infrastructure is built? Well, I get it.
They’re going to build some amazing models. These models are going get smarter and smarter and smarter. And we’ll achieve AGI at some point in time. It’s not my job to figure out when. That’s what they will do.
But our job as cybersecurity professionals is to figure out when those models, when that AI that’s being built starts getting used on a more sort of ubiquitous basis. I’m pretty sure that every one of you, whether you work in cybersecurity, outside of cybersecurity, or you work in traditional enterprise, you have some experiments going on in AI. How do I take what’s being built by these amazing companies, these models, how do you translate that into something useful for our enterprise? And we’ve all heard the use cases. There’s a whole bunch of work going on.
I predict the next three to five years, almost every SaaS application that we know today is going to have a different manifestation. Some of them will have AI assistants. Some of them will have AI agents that will talk to other agents. A lot of the UI that we know today as our UI or front end for SaaS is going to have to morph. Now when that begins to happen, that means we’re all going through a large transformation, whether we’re a tech company or not a tech company, whether we’re a traditional company.
In that transition, we’re going to be looking to see how do we take the fundamental building blocks of AI and embed them in everything we do. When you embed them in everything you do, that makes all of us in security wonder, well, what’s different about AI? What is it that is unique about it? And how do we need to prepare for a future where we as cybersecurity professionals have to figure out how this is going to impact our lives? How is it going to impact what we do?
And what’s fascinating to think about it, the SaaS application outcomes are predictable. You know what you’ve programmed. You know the output you expect. In the case of AI, it is going to be constantly learning. The answer tomorrow will be different, and the answer in two weeks will be different perhaps better, perhaps even better, more precise.
But when you have something that has a mind of its own, if you know how that works, with a mind of its own, you’ve got to inspect it as you go talk to it. You’ve to inspect the output it brings out. So the idea of security will change. You have to constantly test those models, test those applications to make sure they’re not going to go rogue on you in some way, shape, or form. So is that kind of thinking that needs to be deployed amongst the entire cybersecurity industry to try and figure out how AI is gonna change, a, our products.
Our products will become very different because we’re also, in some version, a SaaS business in cybersecurity. Our products will have to start dealing with natural language interfaces and some version of copilots or AI copilots or autonomous AI drivers. At the same time, we also have to make sure we understand when our products start building a mind of their own, a brain of their own, how does that impact what we do? And that’s what our team is going talk about today, in terms of how do we make sure that these developments in AI can be harnessed by our customers in a way that they can go ahead and deploy bravely. So we’ll talk about that.
The other thing which we’re going talk about is, over the last two years, we’ve been building a platform, both on the network security side as well as on the Cortex side. What we started to talk to ourselves about is that the industry has spent a lot of time building analytical capabilities and saying, here’s what I found. Dear customer, dear SOC analyst, dear network analyst, dear cloud security analyst, look what I found. And they say, good luck. Now it’s your job.
I did my job. I found all these amazing things for you with your problems, and good luck. And the analyst wakes up in the morning, starts going to a list solving the problem, and the analyst goes to sleep. The analyst wakes up in the morning, next day and says, good morning. Look what I found.
And you start playing rock a molly again. And the next morning, and so on and so forth. Well, we’ve decided we can’t do that anymore. If you want to get to real time, you have to get in the business of not just identifying problems, but solving problems. So what you will see is we flipped our bit internally.
Our products are now more inclined to say, here’s what I found, and here’s how I can help you solve it. It is sounds very simple. It is a fundamental shift in the way we’re thinking about the future from a cybersecurity perspective. And you will see over time, all of our products will come with recommendations on how to solve the problem and over time allow you an embedded automation that’s gonna help you solve the problem. That embedded automation, the best way to think about it is, if you remember the early self driving cars, or which were not self driving, you started to see some elements of technology.
The car would tell you when you’re about to bang into somebody in the back. The car would tell you when you should apply braking because it about hit something in the front. What is that? That’s a little bit of assist. Right?
It wasn’t called a copilot. It wasn’t called an autopilot. It’s just called a little bit of help from technology. That’s kind of phase one. What did that turn into?
It turned to a bit of a copilot. So let me take the car over for you for this stretch because I know this road. I can drive straight at 65 miles an hour, and that was called a copilot. Then he got to a bit of an autopilot. Then he got to full service driving.
So I think if you think about autonomous cars, they’re showing us the blueprint on how automation and AI is going to manifest itself in our products. And what you will see from Palo Alto Networks across our platform is we are beginning to embark on that journey. You’re gonna see us start recommending solutions to you. We’re gonna learn with our customers, our analyst friends as to how those recommendations manifest themselves into continued automation. From there, we will see, but you’ve done it so many times the same way.
Do you mind if I take over? Not for a point in time. We’re 70% automated, and we spin up spinning up SOC agents and network security agents, but it’s gonna have to take that journey in the industry for that to happen. All the people out on the floor talking about agentic AI, we have to go through that journey. There is no shortcut.
If you flip over to the Cortex side, you obviously see all this capability I’ve just talked about from the platform perspective and recommendations and this self driving going to help you do in security. But what we also discovered is as we have now deployed, and I’m going to say one statistic because I’m going hear a slide, Lee, but I’m going to tell them anyway, we’ve started now we’re ingesting 11 petabytes of data a day for our customers. And we’ve barely gotten started in deploying Xi’an. We have deployed north of 100 Xi’an solutions to our customers. We have sold close to 300 of them.
So we’re on a solid journey with Fortune 50 customers who are actually deploying our Xi’an technology. But we’ve discovered something very interesting. We’ve discovered that all this data we’ve collected for our customers to help them solve the breach incident scenario is actually very useful data. This useful data can be used to solve problems when you’re not in a breach. And Lee and Gonen and others and team are gonna talk about how do we solve peacetime security problems using a wartime SOC.
And what that is going to show you is the idea that over time, there is going to be intelligent consolidation or intelligent integration across the cybersecurity industry. You’re going to get a glimpse into how, over time, with collecting all the security data in one place for an enterprise, not only can you use it for investigations, for breach response, for real time response in an incident, you can also use it to clean up your cybersecurity posture, your cybersecurity estate, and not just highlight, oh my god, we’ve got a problem, but also say, how do you think I can solve that problem? So our team’s gonna talk about that. This is a fundamental shift. This is a three to five year journey that is going to allow us to continue to automate and effectively create cybersecurity agents, whether they are network agents, cloud agents, SOC agents, threat agents over time, working with our customers.
These agents will be out of the box or perhaps bespoke as our customers build their own version of these agents, but they will come with full security capability. Not just that. You will hear from Lee what we are going to do from an agent perspective since you can’t leave RSA without having talked about AI or agentic AI. God forbid, that we don’t talk about it. I think agents are still early.
I heard that the best line in RSA is the S in MCP stands for security. Think about that for a second. All right, you guys. Tough crowd. There’s 50,000 people who are online watching this.
They’re laughing off their chairs, and this room has 200 people, and they don’t get a tickle out of them. All right. So the point, though, is I think what the industry is saying is that it’s very hard to think about agents having permissions, having ability to get something done without having a full conversation around security. And we agree with that. We think a lot more work has to happen from a security, from a permission perspective, and how agents will talk to each other before agents become a reality.
We also think to make agents a reality, we’re going to have to go through that journey of working with our customers on automation, working with our customers on assisting them, working with our customers on taking partial control to saying, I trust you. Now you can act on my behalf. Because the fundamental premise of an agent has to be, I give you agency. And you can’t give me agency until you fully trust me. And for those of you who are in San Francisco, if you go out and get yourself a Waymo, somehow you gave that car agency to drive the car.
Think how long and how much investment it took for you to trust the idea that this car can drive itself. It’s going to take the same amount of diligence, hard work, automation to actually build very useful agents which can take over and do tasks on an autonomous basis. So you’ll see the beginning of that. We are going to give you a sneak peek into the idea of AgenTix, a platform that allows you to build security agents for yourself. Now, if I say any more, they’re going come drag me off of here because I probably destroyed about fifteen minutes of speeches from all the other people following me.
If I stay here any longer, they won’t be needed on stage. So I just want to say, since many of your customers, many of your partners, many of you are design partners who helped us think so many of these things, it takes a village to get these things done. And a village not just of Palo Alto people, a village of our partners, it takes a village of our customers for us to constantly get feedback. So please keep the feedback coming. All feedback goes to BJ Jenkins, who’s our president.
I just do the other stuff. He gets a lot of feedback. He likes feedback. Please send it to him. All the good stuff you can tell me, I’ll make sure the team gets the compliments.
But honestly, seriously, we thank you for your partnership. We thank you for being here. And with that, have a great RSA.
Operator: Please welcome chief product officer, Lee Klarich.
Lee Klarich, Chief Product Officer, Palo Alto Networks: Hello. Hello. How are you doing? Awesome. Thank you so much for joining us.
Yeah. We’re just gonna skip through the next, however much Nikesh covered. No. Don’t worry. All good.
Look. I have a recollection pretty pretty clear recollection of the first time I interacted with generative AI to all of you. Typing your your first prompt. Do you still remember what it was? Was it right?
Whatever it came back with. So for me, it took maybe, I don’t know, an hour after that experience when I started thinking,
Unidentified speaker: how
Lee Klarich, Chief Product Officer, Palo Alto Networks: are attackers going to use this cool new technology? And that led to the next question, which was who is going to benefit more, attackers or defenders? And it’s not that crazy of a question. And my guess is many of you are cybersecurity folks, you probably had similar, questions in your mind. And the reality is most new technologies benefit attackers more than defenders.
It’s just more work for all of us, and attackers get to take advantage of it. Right? And if I ask this question to all of you and I took a vote, my guess is the and I’ve I’ve had this conversation with many people since then. The prevalent assumption is that AI will benefit attackers more than defenders. I have a different point of view.
I actually believe that this is one of those technology inflections that can benefit defenders more, far more. But it takes a different approach and a different architecture. And so a lot of what I wanna share with you today is my view on how it needs to be architected and how we can achieve these outcomes. Now I’m gonna start with a stat that’s gonna question my optimism quickly, and that is we are very clearly seeing the effects of AI from attackers. Volume up 300% year over year.
That doesn’t happen by accident. Right? And interestingly, that is not even the most important stat by far. What is far more important is the speed of attacks. The time it takes an attack life cycle go from the first step to the final step, as that compresses, that is what Nikesh was referring to in terms of the need for security and all forms of security to become as close to real time as possible.
But in order to achieve that, we need a different architecture. You see the the SOC and security operations more broadly, this this is where attacks are detected, investigated, and responded to. But the architecture, of the tech of the tech stacks that that all of you have access to, you’re doing the best you can with them, but most of these architectures were built ten, fifteen, twenty years ago. These architectures were designed for a very different world, and they were not designed for real time. And so as we looked at this and we thought about what is needed, it was very clear we needed to start from the ground up.
And that became the first generation of Cortex XIM. And so the before we even get to features and anything else, that architecture and the principles of it are really threefold. It’s about data, AI, and automation. Right? And and AI perhaps being maybe the most important followed closely by automation, but AI works and by being fed data.
And so an architecture that favors siloed data, endpoint analytics for endpoint data, network analytics for network data, identity analytics for identity data, and cloud analytics, this doesn’t work. You can’t get the best from AI if you’re siloing data and you’re filtering data and you’re not collecting to the right place. And so the data and AI integration becomes super, super important. And then everywhere that we can, we bring in automation. And and, ultimately, this is what turned into XIM, the data AI foundation driving next gen SIM replacements and EDR and NDR and ITDR and CDR and all the different kinds of analytics that are needed for the SOC and, of course, SOAR and all the automation needed, but not as separate products, but as integrated capabilities on the same platform.
And Nikesh mentioned the scaling factor of this. I’ll mention one other factor, which is since we’ve launched this, we just reached a really exciting milestone, which is every XAM customer has in has embedded in the product over 10,000 different detection models. These are what our security researchers teams are building and delivering inside the product. 2,600 of these are machine learning model based, for how to detect, prioritize, and respond to attacks. To put this in context, I think the most number of correlation rules, which is the old way of doing detections, the most number that I’ve seen for any company, any stock is 800.
And those were built over the course of probably twenty years. Imagine having 10,000 at your fingertips largely based on machine learning and AI. It changes the game completely. And perhaps most exciting and rewarding for me is when we see our customers get these benefits. And what we have seen is nothing short of transformational.
We have been able to prove over and over again in companies in every different industry, every part of the world that we can go from meantime remediation from days to hours to minutes. And we can do that journey very quickly because of the power of XIAM. Now we don’t tend to rest on our laurels very often. My team is always excited about innovating, and this was no different. We were seeing that success, we started thinking about what is the next big area of expansion, and that became the second generation of XIM.
And what we focused on is the realization that everything happening in the cloud is where the action’s at. Application moving to cloud. As application move to the cloud, data moves to the cloud, data moves to the cloud, attackers move to the cloud, how do we deliver the best security solution for everything that’s going on in the cloud? And what we what we realized as we started thinking about that is the cloud is also, despite our best efforts, unfortunately, has turned into a somewhat fragmented space and and where shift left is one piece and cloud posture is another and cloud runtime is another. And then the SOC is sometimes and oftentimes not getting all the data they need to be able to do in detection, investigation, response.
And and in reality, those four components of the cloud actually need to come together. And so the the second generation of XIM was when we, earlier this year, launched Cortex Cloud, We’re able to extend the Cortex platform to every aspect of cloud security, starting with as applications are developed and everything going with AppSec to connecting that to cloud posture and connecting that to runtime and connecting all our data to the SOC. And and out of this, we’re able to do some really amazing things. First, by leveraging the openness of the Cortex platform, it allows us to integrate not only with first party but third party, security tools. So this allows us to make sure that in a heterogeneous environment, a multi cloud environment, we’re able to see and collect all of the necessary data to perform the security analytics that we need to.
Second, we’re able to apply all of the AI and automation capabilities to the cloud. What’s important? What needs to be prioritized? How do we remediate it? Can we do that automatically?
And third, as attackers are now focusing the more sophisticated attacks toward the cloud, how do we make sure that we bring the absolute best in class runtime capabilities and SOC capabilities to the cloud? Now you don’t this doesn’t mean you have to adopt all of this at once. What it shows you is how a platform approach can completely change the game designed in a modular way that allows you to on ramp depending on where you need to So that was the second generation. So what are we here today to talk about? XiM three point o, effectively our third generation of XiM.
And here, the the really interesting aspect of this is how do we take all of the data and insights and intelligence we get from the reactive side of cybersecurity or the wartime side and connect it to the proactive or the peacetime side, very much like we did with cloud, but can we now extend this across other areas of cybersecurity, leveraging all the data that we are already collecting and analyzing but now applying it for additional use cases? And so similarly, one of the first places we looked at was everything going on with vulnerability management. This is a hard space. It is an important space. And it is a space that is going through an inflection very much like the SOC, which is vulnerabilities are becoming more, relevant.
Their attackers are figuring out how to exploit them faster and faster. Data published just last week shows that about a third of new vulnerabilities that were exploited in the first quarter this year were exploited in less than twenty four hours. This can no longer be a manual human centric process. We have to bring AI and automation to vulnerability management and, more broadly, exposure management. And so that is what we have done.
We’ve designed this around, first, understanding all of the context. First party, third party, all context. How do we take that visibility, apply a set of analytics to it, specifically AI driven analytics, understand what actually matters? How do we then connect that with opportunities to provide mitigations or compensating controls in order to buy time to use automation to then drive full remediation? That is the cycle.
And that is a cycle that requires AI and automation to facilitate all of this happening in near real time. That will be the requirement. Okay? Now, hopefully, you believe all of that. We’re gonna show you what that looks like.
And for that, I’ve asked my friend Elad Koren to join me, and he’s gonna walk you through how exposure management and cortex actually will show up for all of you. Elad? Thank you very much. Thanks.
Unidentified speaker0: Hey, everyone. So thanks, Lee. I think the, the first thing that you can see from the first screen is how XIM three point o is essentially bringing into manifestation the data, the AI, and the automation pieces in XIM. So the data, the data piece itself is where you can see all the sources that we built in, the Palo Alto Networks sources along with any external sources that we can consume. This is a single pane of glass, but this is just the data.
The data itself allows us to then do many very interesting things because this is where the AI magic happens. This is where we take 1,200,000 vulnerabilities or exposures and we extract the most important ones, less than 500. Let’s understand a bit more about what happens there. This is where We stitch the assets together as part of the XIM platform.
We identify those that really require the actions from the analysts or through the automation. This is where we take only those high severity or exploitable. We clean up all the noise. Show me one organization that could address all of the things that they were handed with. No one.
This is where we want to take the most important pieces, identify those, clean them up and make sure that we can then take only less than 2%, seventeen ks out of 1,200,000 vulnerabilities. Taking that into account, we’re not stopping there. We’re not stopping at the four seventy nine cases created. We actually take it to the next level. Many of those were handed to the automation piece in Cortex XIM.
Many of the four seventy nine cases were handled in probably less than minutes. The automations, the playbooks kicked in, and everything was sorted. You can see there are more than half were actually handled and mitigated automatically. We have 13 to address, 13 out of 1,200,000. At that point, why don’t we take a look at one of them?
This one, we didn’t have permissions. As Nikesh said earlier, permissions, trust. We understand that. This is where you can see a case, one case that the analyst needed to look at where we were able to even show the entire causality chain and identify that there is actually a vulnerability, a firewall in the middle, but the firewall doesn’t have the content, the content required to address the actual vulnerability and exposure that we have there. This is where the system itself will identify the right recommendation to the analyst.
All they need to do is take the content, install that. Once that happens, the magic happens. We had a vulnerability. We had an exposure. We had the right tool in place.
It didn’t have the right content. The system identified all of that, had the full context. Next time, by the way, it will also suggest that this entire automation happens on its own without any intervention from any analyst. And this is how more than half, and later all of them, will be addressed automatically. This is what we’ve built here, and this I am very excited about what’s coming up across the enterprise.
Back to you, Lee. Thank you very much.
Lee Klarich, Chief Product Officer, Palo Alto Networks: Thank you a lot. So we’re not only thinking about peacetime and what we can do proactively. There is always more opportunities for the reactive side. How do we detect, investigate, respond faster and faster, and how do we leverage AI and automation for this? And, on that regard, the the next big thing that we are adding to XIM is the ability to apply all of our analytical capabilities and detection capabilities to email.
And, you know, for a while, email didn’t I mean, it was always important. It’s probably one of the most, you know, common, commonly used communication vehicles in every enterprise. There’s always lots of attacks. A lot of attacks are more commonplace, phishing and malware and things like that. And but over the last few years, we started to see this rise in more and more sophisticated attacks.
And as we’ve seen that rise, part of what we have realized is this requires a different approach. Not only do we have to bring AI to email, which we absolutely do, and how do we leverage generative AI to analyze emails, understand intent, and all of that context, but we have to marry that up with other content context. We have to understand what that same user looks like from identity perspective, on the endpoint, from a network connectivity perspective. The other data sources become critical context for understanding the most, sophisticated attacks, and we bring all of the network effect and threat intelligence data that we have from URLs and links and malware and files and everything else, all of that has to come together into new analytics engine. That forms the basis for detecting, for prioritizing, and importantly, for investigator responding in near real time, just like we do in all the other attack vectors.
And so, again, let’s take a look at what this looks like. Elad, why don’t you walk me through this real quick? Yeah. Thank you. Alright.
Unidentified speaker0: Perfect. Okay. Email security. First of all, why email? Why now?
If you ask somebody 68 ago or even more than that, what about email security? They’d probably tell you, yeah, in like a decade there won’t be even emails, right? The different channels of communications, actually, not only they’re here, they’re actually the number one used attack vector for many of our adversaries out there due to and thank for AI, which is why we decided to take that as an integral part of front row seat in our XIM platform, combining the, the identity, the agent, the endpoint, the network, taking the email context into our, email analytics engine. That in itself can provide us the entire context to everything that happens in the organization. So if we just take a look at the cases, well, you know what, why don’t we, if just looking at everything that we have there, looking at a a bird’s eye view, you can see even the automation kicked in here.
And more than probably 80% of the cases were addressed automatically. This is the point where we’re bringing together the entire, again, data, AI, and automation. But with those 16 manual cases, right, those are the more complex ones. The basics will be addressed automatically by the system. It’s a standard.
Right? This is this is the way to do it. This is where we can take a look at the cases, the different cases that the system will flag. These are the more complicated ones. What we’ve built here is that email analytics engine that can take all of the context, combine it together, identify the intent of the email, and then output a full investigation within Axiom.
Let’s take a look at one example case. In that one example case that we have, we have all the information available. So the analysts coming in, they can see all the context, everything that they need. They can see all the causality issues, everything that led the system to believe that this is something that needs to be looked at. Granted, ideally, the system would address everything with automation, but then I wouldn’t have an interesting demo to show you today.
So we chose something. And in this case, the email was flagged because of a sender that was poofed. But it’s not just that. The email got to three different employees. The analyst now can take a look at all the information.
They can even click through and see the full causality chain of everything that happened along the way, including the actual email. This is where the system itself will show the analyst the entire analysis of the email intent, including the links that were looked at and the email’s specific urgent terms, everything in the email itself that led the employee to then click the link, do a mistake, true. If I go back to the causality chain, what we can see is that it actually resulted for this specific demo in a malware being installed on the machine. But this is again where automation kicks in. This is where we can, through playbooks, identify any potential SSO authentication triggered after this happened and through the playbook address that immediately before there is any potential breach.
Now this automation is a recommendation. It can be triggered automatically. The assumption is that next time it will join this entire flow so it’ll be fully automated in a way that can ultimately provide that highest level of security across the enterprise with email as a front row seat of the entire enterprise, including XDR and everything. Thank you, Lee.
Lee Klarich, Chief Product Officer, Palo Alto Networks: Thanks a lot. One more area to to share before I turn you over to Anand, and he’s gonna walk you through a bunch of things around AI security. And, you know, for those of you here at RSA, you probably can’t walk around any corner without someone talking to you about agents and agentic AI is the future. Yes. I have a perspective.
And, you know, if you think about this in the context, and Nikesh used the the this notion of copilots and autopilot, and it’s a it’s a it’s a good sort of way of thinking about it, I think. And because this it’s this idea of a lot of uses of AI started off with, can I help you? Can AI help you? Here’s a prompt. Enter a question.
Get an answer. The notion of agentic AI, though, is that we’re actually going to turn AI loose to take autonomous actions on our behalf. And so there is a tremendous amount of trust that we have to build in order to actually allow that to happen. And as we’ve been experimenting with this, we’ve been building proof of concepts of this, we’ve even been building this into a number of capabilities internal to Palo Alto Networks, we’ve had lots of learnings about how to actually make this work. Most notably, though, would be that AgenTik AI is going to require a fusion of AI and automation.
And if you think about this, the the reason is the AI is probably like, we all understand the the benefits of AI, the the the creativity side of it, the creation, the ability to sort of reason, at least as far as machines can. But the challenge tends to be in the deterministic nature or the lack of deterministicness in terms of its outcomes, the predictability. But that is where automation actually, is amazing. It can be incredibly deterministic and predictable in terms of what it can do. What it lacks is the ability to create anything new.
And so by fusing these together, we believe this is how we’re going to be able to solve some of the most, challenging aspects of agentic AI. And this isn’t just theory. We are very deep into development right now, And I get to give you a sneak peek, not quite a full announcement yet, but a sneak peek into Agentyx, which is our approach to Agentyx AI for security. And so let me just give you a very quick look at this. Again, we’re we’re in development, but there’s some some very cool aspects we’re building.
Imagine, having these these, AI agents that are constantly listening to different inputs in order to understand when it needs to take action. And as it needs to take action, its ability to not only run existing plans but to actually generate new plans of action. And as it generates those new plans of action, to be able to then execute a series of tasks in order to accomplish an outcome. Now it won’t be a single agent. There’ll be specialized agents.
Different agents will be specialized for different tasks and different use cases, allowing them to to learn and improve over time and even possibly being driven by different, generative AI models because some models will be better at different things. And then where they are not able to fully carry out these tasks, there will be an ability to review. Sometimes it might be a new plan needs to reviewed by a human and approved before it’s allowed to run. In other cases, it might be the permissions aren’t fully provided in order to complete the task in which it could be reviewed. So it’s not independent of a person or human overseeing it.
It’s just that more and more is able to act autonomously. And over time, as more trust builds, more autonomy will be given. So there’ll be multiple agents. Let’s let’s take an example. Imagine the a threat intel agent.
Right? Not too hard to imagine. And has 17 actions enabled, possibly more actions to be enabled in the future, but 17 is a pretty good start for what is able to do. And imagine that this this threat intel agent has been tasked with analyzing security research blogs in real time. So anytime a new security research blog is posted, its job as an AI agent is to go analyze it and figure out, first, what does it mean?
Second, is it relevant to me? Third, have I seen it? Right? Someone just posted an article about some new attack and attacker and what it did. Was I affected?
Because this is new. I don’t know. I wanna look retrospectively. So imagine being able to go do all of that analysis. This agent is doing it autonomously using a combination of AI and automation and even possibly arriving at a conclusion that I have found a user that was compromised by this attack.
I’m going to quarantine their device and reset their password. All of that, I can actually do autonomously. Now at the end of it, I might tell somebody, hey. I just did this. You should go with the next steps of investigation, forensics, and things like that, or that might be handing off to another AI agent that’s a forensic specialist.
Right? It’s not that hard to imagine just how powerful this would be because all of a sudden, can be running continuously as opposed to requiring human interaction at every stage of the work. And so very, very cool stuff going on. I hope I hope you agree. I get really excited about all the things we build.
How we are leveraging AI to, embed in our security products to make the world safer and better for all of you. That is what we do every day. That is what gets us excited. Thank you all very much for joining us.
Unidentified speaker1: The XIM platform has been hugely efficient for our organization. We have over 6,000,000,000 events that come to the platform. We’ve got a thousand alerts that get boiled down to a handful of incidents a day. Every single one of those incidents gets touched by automation and it gets triaged and closed usually within thirty seconds.
Unidentified speaker2: We look at the promise of what we’re seeking for XIM and where we’re seeing the benefits. It’s the ability to more effectively consolidate the visibility through all that data to make sense of it in terms of distilling it, processing what matters out of it, to really detect and prevent threats in our environment.
Unidentified speaker: Using a platform like XIM and the AI tools available with it will allow us to consolidate that information, identify it, respond to it much quicker.
Unidentified speaker3: What I love about Palo Alto Networks is they’re always constantly innovating. And I know that with the three dot o version for XIM, have some great enhancements like email security and exposure management.
Operator: Please welcome senior vice president and GM, Anand Aswol.
Unidentified speaker: Alright. Good afternoon. It’s great to be here today. Look, a year ago, we talked to you about how the assessment should happening with AI transforming businesses. And we didn’t just talk about the potential.
We showed you we launched our secure AI by design portfolio. It’s really solving two broad use cases, how employees can safely access Gen AI applications and how builders can build these applications and deploy it securely. But a year is a lifetime in the world of AI. And today, I’m excited. I’m super excited to talk to you about all the amazing innovations that we’ve been working to secure your AI journey.
Now AI usages are open very rapidly. Majority of employees and organizations are using AI powered applications to get their work done more effectively, more efficiently. Now organizations would like to have complete visibility into their usage of AI applications, But most important, they want to protect their most important crown jewels, their data, from leaking out. Now majority of these AI powered applications are getting access from the browser. The browser is your new workspace, and it is the primary attack vector.
The existing consumer browsers are not well equipped to handle these advanced threats. You need a secure browser for this new era. And Pisma Access Browser is a secure Chromium based browser that you can browse, you can work, shop, chat, just like you do with your favorite browser, but with security built right into it. It’s natively integrated into the SASE architecture, allowing you to have safe and compliant usage of Gen AI applications. It’s also able to prevent advanced threats, threats and traffic that you can’t decrypt due to business reasons or technology reasons, threats and traffic that only get reassembled in the browser and are browser native, ensuring that you do not compromise on your user experience, untethering your web and SaaS applications for their maximum performance, reducing your reliance on legacy VDI infrastructure, at the same time ensuring that every application is consistently secure.
Let’s now shift gears. Let’s talk about applications your developers are building to transform your business, to give newer and better experiences to your end customers. In the last twelve months, we’ve seen LLMs go go from lab prototypes to your core tools your developers are using, customer support applications, core business applications. This transformation is now full blown. It’s super exciting if it’s adopted securely.
Let’s talk a little bit about how the app architectures have evolved. In the last decade or so, app architecture has significantly evolved. The way we write applications are very different, and the way we secure them are also very different. If you rewind the clock, traditionally, applications were built using a three tier architecture. You get a front end, you get a database, and you get the back end of the application.
Along came the cloud, giving the opportunity to organizations to modernize their application using microservices and leveraging the cloud. AI powered applications represents the third wave of application transformation. It’s not just taking an application, plugging in a model, and you’re done. You’re bringing along an entire AI ecosystem, infrastructure, models, databases. All of these various components, they talk to each other.
They talk to the outside world because AI systems give you the best answer when they behave as compounding systems, combining and transcribing the output of various sources, models, tools, plug in, datasets, to give you the most optimal answer. Now as all of these new ecosystem components come in, the attack surface increases, and you’re seeing new supply chain risks, new configuration risks, and new runtime risks. So let’s talk through it. If you look at your AI infrastructure, it’s susceptible to supply chain risks. Your developers, they could use, incorrupt corrupt machine learning libraries.
They could use insecure prompt templates. Your models, they’re susceptible to misconfigurations or vulnerabilities. And as you roll these applications into production and these various components talk to each other and they talk to the outside world. You have runtime risks. Now AI applications also accept unstructured data as input, increasing some of these risks.
Then you have your tools, your plug ins, the so called helper functions for developers. They do amazing work from translation, search, text to SQL queries, but many a times, they have excessive permissions on various parts of our ecosystem. And last but not the least, data. You take your sensitive data to train these models. Now you wanna ensure that this data doesn’t leave the organization, doesn’t get leaked out.
Now AI is getting going more and more accelerated adoption with the introduction of agents. Unlike LMs, LMs give you answers. Agents will give you action, and they plan, they act, they adapt, come back. Like Nikesh said, they have a mind of their own. The introduction of newer protocols, model context protocol, makes it easier for models to talk to external databases, tools, plug ins.
The agent to agent protocols will make it easier to coordinate across these agents. And all of this is gonna fuel the growth of AI agents. AI agents can also be built on a plethora of platforms, SaaS platforms, cloud service providers, low code, no code platforms. All of this is also affecting the app architecture, adding new capabilities for memory because agents can retrieve answers, but they can also be personalized over the long term. By nature, agents will act autonomously, so they need to have access to internal data and systems to perform those actions.
All of this, again, significantly will increase your attack surface, adding newer risks. Let’s talk about a few. First, excessive permissions. To act autonomously, agents need permissions to many of your data, to many of our systems. And in many cases, you have to solve the problem of them having excessive permissions.
Or you take memory. Attackers can poison memory to alter the behavior of agents, and agents are using variety of tools. Connected to APIs, third party plug ins, databases. You have the risk of tool misuse, and you also have the risk of identity impersonation. So what’s needed?
You see all these things happening. The attack surface is increasing. You have new types of threats coming in. When I talk to leaders, the first thing they wanna understand is visibility because you can only secure something when you see it. Every app, every agent in the system, what model is used by the application agent, what data is used to train the models, what other data sources the application and agent is connected to, the permissions.
All of these need to be solved holistically. Absolutely right. I’ll talk about five key pillars that’s needed for comprehensive AI security. First, let’s talk of model scanning. Now in traditional security, we scan code.
We scan infrastructure. So what’s different in models? The difference is that the inherent risks are in the training data being used in the model architecture, in the model behavior. So we need to make sure we do comprehensive model scanning before we can roll these applications into production. Second, posture management.
Now we must absolutely lock down misconfigurations, lock down excessive permissions, and all those data exposures holistically. Third, AI head teaming. AI systems, especially agents, they don’t just run code. They adapt. They reason.
They learn. They react. Traditional security misses these emergent behaviors where you have to understand the model intent, the model interaction, the model behavior. So we need to make sure that we are able to mimic how adversities will think when we do AI red teaming. Next, runtime security.
LL Alarms are the intelligence layer, and you must defend your models and your data from all the runtime risks. Prompt injection attacks. Attackers are using that to steal sensitive information. Malicious code generation, model DoS attacks, or your data leaks. All of this needs to be solved comprehensively.
And last but not the least, AI agent security. Now here you think of two key aspects. One is around what identities and permissions the agent have and what it can do. And second, when the agent are in action, what are real time behaviors and the runtime risks associated with that? Now the industry has responded to all of these AI security, which are a bunch of point products, a bunch of point solutions.
For each of the pillars I talked about, there are multiple point products, Different management planes, different UIs, these tools and products don’t talk to each other. They don’t share threat intelligence. This cannot work. It’s too complicated. The good thing is that there’s a better answer.
We launched Prisma AIs yesterday, the most complete, the most comprehensive AI security platform, built with best in class security technologies, a unified management for all the layers I talked about, giving you complete visibility and control on what you need to do, ensuring that you can discover your AI ecosystem, assess the risks, and protect against threats. And each of the five components of the platform are built with leading, best in class technologies. Let’s take an example, model scanning. We scan we have we run the world’s largest malware detection engine. We analyze close to 80 to a hundred million files every single day.
We’re extending that to to scan models, to look at malicious code, and other behaviors in the model. Posture management. It’s not just posture of the model. It’s posture of the network, application, agent, model, dataset, all of it done comprehensively, alerting you in real time. AI red teaming.
Our AI red teaming is done on a multi agent architecture where we learn, we act, we react. We wanna mimic exactly how an adversary will think so that we’re able to mimic real time behaviors to give you the best protection. Now we have been pioneering and working a lot on runtime security. Last year, we talked about the amazing work that Team did on all the new detections for runtime, and we’ve extended that, extended that to ensure that we have the most comprehensive runtime security across your applications, your models, your data, and your agents. We have over 27 different prompt injection techniques.
We’re preventing from model malicious code generation for models. We have a thousand plus data patterns that are pre built, programmable, to be to make sure that we can detect all of these data leaks that can happen. For agent security, we’re making sure that we can detect memory poisoning, tool misuse. So it’s the most comprehensive runtime security that we have. Now, I’ve talked to you about the various aspects of the agent.
I’ve talked to you about the various aspects of the platform. Let’s take a look at how the platform works in action, in terms of how you discover, how you assess the risk, and how you protect. So it starts with discovery. Now this is an inside out view of your entire AI ecosystem. Every user connected to every app AI application, AI agent, every model, every tool, every plug in, every data dataset.
We see your foundational models, your fine tuned models. Now this is not just an inventory. It’s how AI is flowing through your system. But discovery is just a start. Right?
It’s like me telling you, you have a leak in the house. What do you do next? Let’s talk about the risks. We have risks in posture and in runtime. The screen shows you the risks associated with posture.
Now, postures are potential risks attackers can exploit, so it’s important to understand what they are. Let’s take a look at the first one. Now, we have models that have not been scanned. They’re in preproduction yet. The applications are in preproduction.
The platform recognizes that and is asking us to scan these models. Let’s take a look and scan it. Now I scan, and, as we can see, both the models are getting scanned for any vulnerabilities, looking for malicious code, looking for model behaviors. And one of them shows red. It’s a deserialization vulnerability.
Now that could be exploited by the attacker. Now this is not just a misstep. This is a breach waiting to happen. Prisma has determined that the best action is to block this malicious model. So let’s take a click and block it.
We blocked that malicious model. Now let’s go back to our command center and see what other risks we have for posture. Now we have one more risk. It’s tied to agents. I mentioned earlier, one of the big risks with agents is around excessive permissions.
Let’s take a look at what happens with this agent. Now this is a leads agent built on Microsoft Copilot Studio. It’s accessessalesforce.com. Now the risk that it’s showing you is that this agent has excessive permissions. It can update and delete your Salesforce records.
Now that’s obviously not what you want. The platform understands the excessive permissions and is giving you a recommendation to fix those excessive permissions. So with a single click, we’re able to click it. Let’s go back to the command center. And now you can see all the posture risks have been addressed, but we’re not done.
Let’s now look at the runtime risks. Now in this case, it shows us 10 applications and two agents that we haven’t done AI red teaming or threat simulation with, and it’s it’s nudging us to start the simulation. So let’s click on it and start. In doing this threat modeling, in a typical scenario, this could take hours. For the purpose of this demo, I have short circuited that to show you what it can do.
Once the red teaming is done, we can view the results. Now, the results are very comprehensive. For every single vulnerability, it can get details. It can look at what’s happening to every single app, every single agent, and then it gives you a list of recommendations. Let’s take a look at the recommendations.
Now, this is unique. If you think of the recommendations, it’s twofold. Once it’s once first thing it’s recommending to you is the form factor that you need to deploy. And second, it’s also giving you dynamic right security policy based on best practices and based on results of red teaming. Now if it’s applications, we understand which cloud they’re running on.
We’re able to spin up the right instance in the right cloud and make sure all the systems are connected. If it’s your no code, low code platform, your agents, then we have AI runtime API that we can invoke for runtime protection. So now I can deploy errors protection. As you can see, with Protection Live, your AI infrastructure is no longer exposed. It’s actively defended.
Traffic now flows through AI runtime security, enforcing policies tailor made to your environment and aligned with the real world threats that matter to your business. Let’s go back to the command center. As you can see now, the system is working. It’s monitoring. It’s enforcing.
It’s adapting to keep your AI ecosystem secure. So as you’ve seen, with Prisma Airs, we are the industry’s most complete, the most comprehensive AI security platform. You’ve seen how the platform works. Now with this, what developers can do is deploy AI applications bravely. Next, I would like to hear for you to hear from a few customers who have been at the forefront of this AI journey.
Please view the video.
Unidentified speaker4: AgenTeq AI is really acting as an autonomous AI agent, and we wanna make sure that we provide additional controls so that we can monitor what API calls it’s going to, how they are using LLMs. What we’re excited to see with AI runtime security is the ability for us to now look and see how API calls are being made. We can look at malicious detections that are being performed by agentic AI tools across the board.
Unidentified speaker: What makes Palo Alto Networks AI runtime stand out are data security, malware security, and AI security. All the three functionality is packed into a SaaS form factor, and there are thousands of models powering the AI runtime. These models keep on getting continuously updated as new threads are detected.
Unidentified speaker: Our product like AI runtime security is not just good for Moveworks, it’s it’s absolutely essential for the industry.
Operator: Please welcome Anand Aswole and Ian Swanson.
Unidentified speaker: So as you probably saw yesterday, Palo Alto Networks announced the intent to acquire ProtectAI, a leading AI security company in the world. Excited to have Ian Swanson, cofounder and CEO of Protect. Ian, thanks for joining us. Yeah. Thank you.
So, Ian, what parts of the current AI landscape are creating the most urgent needs of security challenges as you see it when you’re talking to many customers.
Unidentified speaker5: Yeah. Talking to many customers. AI is all the buzz. Securing AI requires truly an end to end approach. Model risk assessments, robust posture management, continuous testing and red teaming for adversarial attacks, and add runtime protections for AI and agentic threats.
Add to that third party risk assessment in, let’s say, open models, it’s super clear that AI security isn’t just a one time check. It’s an ongoing process, Anand.
Unidentified speaker: Yeah. There’s a lot of buzz on agents as as, we talked about earlier as well. And as agents get deeply embedded in business processes, what are the unique challenges to make sure agents are, easy to secure along with the applications we have? We talked about new risks coming
Unidentified speaker5: with agents. So first off, AI agents are harder to secure than traditional AI applications. Why is that? Well, first, they’re autonomous. They’re dynamic.
They’re deeply embedded in business processes, fragmented ecosystems with expanding attack surfaces. Agents can initiate actions, evolve over time, and even operate without visibility or control. As AI agents move towards mass adoption, and they will, we must secure them with the same urgency as the value that they’re gonna deliver. Yeah.
Unidentified speaker: Understand. So, a lot of discussion, a lot of excitement on Protect. Yeah. What’s the combination? Like, when we when we talk to customers between what between ProtectAI and Panda Blue, what do bring to our customers?
This is
Unidentified speaker5: the really exciting part, for me. The combination of ProtectAI and Palo Alto Networks, it’s gonna create a powerful, better together opportunity for our customers and deliver a comprehensive platform for end to end AI security, from data, model artifact security, to what we talked about today, runtime and agentic AI system defenses. As enterprises scale AI, it must be safe, it must be trusted, it must be secure. There should be no AI in any enterprise without security of AI. And I truly feel that Palo Alto Networks is gonna be the trusted partner to secure AI at scale.
Alright. Thank you, Ian. Appreciate it. Thank you.
Unidentified speaker: To wrap up, we’re delivering some game changing innovations to ensure your enterprises can securely embrace AI. Prisma Access Browser allows your employees to browse bravely with security against the most sophisticated web threats. And Prisma Airs allows your developers to build and deploy bravely, knowing that every model, every agent, every data, every app is protected. No matter where you are in your AI business, we’re here to protect you. Thank you.
People
Unidentified speaker: expect us to be a leader in security. They just think that we’re smart and we’re at the forefront of security so they can sleep at night. I trust Palo Alto Networks to be smart so I can sleep at night.
Unidentified speaker: AI introduces an entirely new threat vector when it comes to securing our products through prompt injection attacks, through content moderation, through hallucinations, and a bunch of these other things that did not exist in
Unidentified speaker: a world pre AI. Customers are excited that Palo Alto is leading the way in AI security because it protects both the ends of the spectrum. The user interacting with the JN AI application and the JN AI application interacting with the LLMs.
Unidentified speaker: Not only do we need a new generation of tools, what’s really important is unification of data and a single pane of glass.
Unidentified speaker: Having a partner like Palo Alto Networks allows us to grow with the business. It allows us to deliver tools to the business that will enable them and help them to use the latest technologies, but use it safely and securely.
Operator: Please welcome back Kelly Walder.
Kelly Walder, CMO, Palo Alto Networks: Well, the innovations that we’ve shared today were forged through deep conversations with customers like you and the relentless curiosity about where the future is headed. And we’re excited for you to be part of this journey, So we invite you to dig deeper with our team, ask the hard questions, and envision how Palo Alto Networks can become your cybersecurity partner of choice. Because innovation isn’t just about a moment. It’s about momentum, And today is just the beginning as we set our sights on tomorrow. So I want to thank you all so much for being here.
If you’re one of the 50,000 plus people that are joining us virtually, please snap the QR code behind me here and connect with us so we can go deeper. And if you’re here in San Francisco, we look forward to continuing the conversation this week at the Hotel, Canopy where you can see working demos in action, talk with our team, and experience what we have in store today and tomorrow. Thank you all again, and we hope to see you soon.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.