Google Cloud at Goldman Sachs Communicopia Conference: AI-driven growth strategy
On Tuesday, 09 September 2025, Google Cloud’s CEO Thomas Kurian presented at the Goldman Sachs Communicopia + Technology Conference 2025, offering a strategic overview of the company’s AI-driven growth trajectory. With cloud adoption still in its early stages, Google Cloud (NASDAQ:GOOG) is capitalizing on AI solutions to drive customer engagement and expand its market reach, while also addressing challenges in competition and operational efficiency.
Key Takeaways
- Google Cloud’s AI focus has led to a 28% sequential quarter-over-quarter growth in new customer wins.
- The remaining performance obligation (RPO) stands at $106 billion, with over 50% expected to convert to revenue in the next two years.
- AI users consume 1.5 times as many Google Cloud products as non-AI users.
- Google Cloud is investing in building its own chips and models to optimize costs and improve margins.
- The company is enhancing operating margins through capital efficiency and improved go-to-market strategies.
Financial Results
- RPO and Revenue Growth: Google Cloud’s RPO is $106 billion, growing faster than revenue, with significant conversion expected in the next two years.
- Customer Growth: The company has achieved a 28% growth in new customer wins in the first half of the year, driven by its AI offerings.
- Margin Improvements: Focused on operating discipline, Google Cloud is enhancing margins through capital efficiency and engineering productivity.
Operational Updates
- AI Infrastructure: Google Cloud’s infrastructure is optimized for high-performance AI applications, boasting twice the power efficiency of competitors and a 50% performance advantage.
- AI Models and Data Cloud: The Gemini suite of models is widely adopted, with a significant increase in data processing in BigQuery.
- AI Agents: The Gemini command-line interface AI agent has grown to nearly a million users, and the company’s commerce agent handles roughly 5 billion transactions.
Future Outlook
- Strategic Investments: Google Cloud will continue investing in AI infrastructure, building proprietary chips, and optimizing power usage effectiveness.
- Geographical Expansion: The company is addressing the demand for inference capabilities across various regions, driven by sovereignty considerations.
- AI Deployment: New AI model revisions are expected to unlock new capabilities every six months, focusing on digital products, customer service transformation, and IT department enhancements.
Q&A Highlights
- Adoption Trends: Cloud adoption is still in its infancy, with a majority of servers and applications remaining on-premise.
- Investment Priorities: Google Cloud is prioritizing investments in data centers, power, and product specialization to cater to specific customer segments and domains.
In conclusion, Google Cloud’s presentation at the conference underscored its strategic focus on AI to drive growth and efficiency. For more detailed insights, please refer to the full transcript below.
Full transcript - Goldman Sachs Communicopia + Technology Conference 2025:
Eric, Interviewer: Okay. Thanks, everyone, for getting settled. Our next presentation and conversation is going to be with Alphabet, with Thomas Kurian, CEO of Google Cloud. I’m going to start with a safe harbor, give a little bit of Thomas’s background, bring Thomas up, and he’s going to go through some slides, and then we’re going to have a conversation. Some of the statements that Mr. Kurian may make today can be considered forward-looking. These statements involve a number of risks and uncertainties that could cause actual results to differ materially. Please refer to Alphabet’s Forms 10-K and 10-Q, including the risk factors discussed in its Form 10-K filing. Any forward-looking statements that Mr. Kurian makes are based on assumptions as of today, and Alphabet undertakes no obligation to update them. Thomas Kurian joined Google Cloud as CEO in November of 2018, bringing deep enterprise experience to the company.
He has grown the business into one of the world’s largest public clouds with more than a $50 billion annual revenue run rate. Thomas is here for the third year in a row. Thomas, welcome to the conference, and thanks for coming again.
Thomas Kurian, CEO, Google Cloud: Thank you. Thank you, Eric. Thank you all for having me. You know, cloud computing continues to grow as the primary vehicle through which enterprises deploy their core information technology systems. Cloud, although it has grown, is still in its early phase because a lot of machines and applications still run on-premise and have not yet moved. Despite our growth, we see a lot of opportunity ahead. Over the last two years, organizations in many industries have been changing who they choose as their cloud partner. In the past, they were focused on application hosting or web hosting. Increasingly, they’re looking at who can bring the technology and solutions to help them transform their business with artificial intelligence applied in different domains in their organization. At Google Cloud, we have deep product differentiation because of years of work in AI.
As a result of that product differentiation, we’re capturing new customers faster, deepening our relationship with existing customers, and growing our total addressable market. Why are we winning? We provide deep product differentiation in performance, cost, reliability, and efficiency in AI infrastructure. Second, we provide deep differentiation by offering a leading suite of best-in-class generative AI models. In order to feed these models, we use our strengths and historical strengths in data processing, analytics, and security to feed models with high-quality data and keep them safe. Finally, for many years now, we’ve been building domain-specific AI applications and agents, and that work is now seeing a lot of interest from customers. Starting with AI infrastructure, we’ve introduced chips for many years. We’re in the 11th year now building AI systems and chips. Our AI systems are optimized for high-performance, highly reliable, and scalable training, as well as for inference.
For example, if you’re running a large-scale cluster, we have two times the power efficiency, meaning you get two times the flops per watt. With power now a scarce resource, that means you get a lot more capacity. We’re typically seeing about a 50% performance delta between us and other players. If you look at the total capacity you can get on a single system, you can get 118 times more throughput through our systems than you can from the next player. In addition to that, we’ve integrated high-performance, AI-specific storage that can be used to scale out a cluster much more efficiently. If you’re doing inference, we offer incredibly low latency. Our years of work in storage optimization have now seen lots of interest. We’ve seen a 37x increase in the volume of data being used in our AI-optimized storage.
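To make the flops-per-watt claim concrete, the arithmetic is simple (an illustrative sketch; the symbols are not from the talk): under a fixed power budget, deliverable compute scales linearly with efficiency.

```latex
% Compute deliverable under a power budget (illustrative).
% P: available power (watts), e: efficiency (flops per watt), C: throughput (flops).
C = e \cdot P
% Doubling flops per watt at the same power budget doubles throughput:
\frac{C_{\text{new}}}{C_{\text{old}}} = \frac{(2e) \cdot P}{e \cdot P} = 2
```

This is why a 2x flops-per-watt advantage translates directly into 2x capacity when power, not hardware, is the binding constraint.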
We connect all of this with very high bandwidth optical networking. The value of optical networking is you can dynamically change the configuration of a cluster so that you can slice it up differently if you want it for training and inference without taking an outage, which is hugely important for labs as they see demand shifting from training workloads to inference. Google has been at the forefront of most of the software that people are using for training, for example, compilers and frameworks like JAX, XLA, and Pathways, and all this software expertise allows us to optimize the stack. We see demand from four customer segments. First is the AI labs: if you take the 10 largest AI labs in the world, nine of them are our customers. We see demand from traditional enterprises who are deploying AI models. We’re seeing demand in capital markets.
As capital markets shift from using classical computation for their algorithms to using inference, the same systems we offer can be used to provide very high-frequency calculations. We’re seeing interest in high-performance computing applications. SSI, a leading lab, is a customer. LG Electronics and LG AI found both performance and cost benefits in using our infrastructure. On this infrastructure, we offer a suite of models, not just Alphabet’s, but 182 leading models from across the industry. Our own models fall into four categories. We offer leading models for large-scale generative AI applications: Gemini. Gemini leads in many dimensions: performance, cost, quality, factuality, and the ability to do very sophisticated kinds of reasoning. It’s used by 9 million developers to build applications. Just to give you a sense, compared to 1.5, which we launched in January of this year, 2.5, our latest model, reached a trillion tokens 20x as fast.
We’re seeing large-scale adoption of Gemini by the developer community. In addition to that, we offer a leading suite of diffusion models that create images, video, audio, speech, and so on. We’ve added a third set of models around scientific computation. For example, our time series model is used by many firms in financial services to do numerical prediction of sequences. For molecular design, we offer a model to help people design molecules, which is getting a lot of interest in the pharmaceutical industry. There’s a whole range of models. As people switch from just using a raw model to building an agent, we’ve introduced, based on our history in leading many open-source projects, something called Agent Development Kit, which is a platform to help people build agents. It is by far the leading agent development platform in the industry, supported by over 120 companies.
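For readers who want to see what building on the Agent Development Kit looks like, here is a minimal sketch following the pattern in ADK’s public quickstart; the tool function and its contents are hypothetical, and parameter details may vary across ADK releases.

```python
# Minimal agent sketch with Google's Agent Development Kit (ADK).
# Illustrative only: the tool below is a hypothetical stand-in, and
# parameter details may vary by ADK release. Install: pip install google-adk
from google.adk.agents import Agent


def get_order_status(order_id: str) -> dict:
    """Hypothetical tool: look up an order's status in a backend system."""
    # A real implementation would call an order-management API here.
    return {"order_id": order_id, "status": "shipped"}


# The agent routes natural-language requests to tools and composes replies.
root_agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",  # any Gemini model ID the project can access
    description="Answers customer questions about orders.",
    instruction="Use get_order_status to answer questions about orders.",
    tools=[get_order_status],
)
```

Run locally with ADK’s CLI (for example, `adk run`) or wire the agent into a larger multi-agent tree; the framework handles tool-calling and conversation state.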
To give you a sense of the scale, if you compare us to other hyperscalers, we are the only hyperscaler that offers our own systems and our own models. We’re not just reselling other people’s stuff. The volume of tokens we process is twice that of other providers, in half the time, so roughly four times the rate. We have a lot of different companies using these AI models, from companies creating digital products to companies using AI within their organization. Canva is an example of a company using our diffusion models to create image and video content. ServiceNow is one of many SaaS companies that use our model, Gemini.
The reason they’re using it is not only does it give them great performance and quality and latency, but it can also be deployed in four different configurations: in a cloud, in a classified environment, out on the edge, and also now on top of any NVIDIA cluster. In the past, if you wanted to run a model in your own data center, you had to use an open-source model, because a proprietary vendor would have had to give up its weights. We’re the only ones that offer that as well. When you use models, you need to feed them with high-quality data. As you put more and more of your company’s information into the model, you need to keep the model secure. Our history and expertise in large-scale data platforms has helped us, as well as our focus on building security products.
We allow people to migrate data, clean it, prepare it, and feed it into the models using our data cloud. Second, we provide incredibly low-latency connectivity between our analytic and database platforms and models running on our cloud, allowing people to use models to process information from our data platforms. Third, as people want to understand data and use models to reason on this data, we’ve introduced new data science and conversational analytical agents. Think of it as vibe coding with your data. It is much easier for anybody to ask questions in natural language, do data analysis, and also create data science models. All that is driving growth in our data platforms. To give you a sense, we’ve seen a 27-times increase in the volume of data processed with Gemini in our data cloud, BigQuery.
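The "vibe coding with your data" idea can be approximated in a few lines: have Gemini translate a natural-language question into SQL, then execute it on BigQuery. This is a sketch of the general pattern only, not Google’s conversational analytics agent; the table schema, prompt, and credentials are hypothetical.

```python
# Sketch of the natural-language-to-SQL pattern described above.
# Not Google's conversational analytics agent; the table and prompt are
# hypothetical. Requires: pip install google-generativeai google-cloud-bigquery
import google.generativeai as genai
from google.cloud import bigquery

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")

question = "What were our top 5 products by revenue last quarter?"
prompt = (
    "Write one BigQuery SQL query for the table "
    "`myproject.sales.orders` (product STRING, revenue FLOAT64, "
    "order_date DATE) that answers the question below. Return only SQL.\n\n"
    + question
)
sql = model.generate_content(prompt).text.strip().strip("`")  # crude fence strip

client = bigquery.Client()  # uses application-default credentials
for row in client.query(sql).result():
    print(dict(row))
```

A production system would validate the generated SQL before running it; the point here is only how little glue the pattern requires.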
We’ve seen that BigQuery, which normally, when people think of data warehouses, they think of things that handle numbers and tables, is now also being used to store and process unstructured data. We have many more customers than some of the pure-play providers. Our strength in security is now applied to AI models. We protect your data. We have new solutions to protect models themselves, so that when you load your data into a model, the model itself does not get compromised. Third, with new advances we’ve introduced, we also protect organizations from threats that use AI models to attack systems. All that has driven growth with a lot of different customers, from regulated industries to commercial enterprises to small businesses.
Two quick examples: Radisson Hotels consolidated all their customer segmentation data and all the data from their hotels in our data cloud and used Gemini and our diffusion models to create advertising. Virgin Media is using the same combination, but to improve the speed of decision-making and data engineering within their organization. Lastly, we started our work to build domain-specific enterprise agents in 2021. We’ve been working on it for four years now. We’re focused in five areas. First, agents to help software engineers write code: our Gemini command-line interface AI agent, which we introduced on June 24, has grown to close to a million users already. We also allow people to build domain-specific AI agents, for example, for marketers to create content and customer service teams to handle customer service interactions.
We’ve seen strong growth, for example, in our customer service technology, with a 10-times growth in chat and voice interactions. We’re also building domain-specific agents for specific industries, for example, to help people do shopping and commerce. Today, we handle roughly 5 billion commerce transactions through our commerce agent. We make all of these agents, as well as any bespoke ones that people want to build, available through a single platform we call Agent Space, which provides a single panel for a company to access and use all of the AI technology within their organization. We’re seeing growth and a broadening of our addressable market by applying AI in domains that IT departments historically didn’t serve: marketing, customer service, commerce, etc. Mercado Libre is one of the largest e-commerce systems in Latin America. They use our shopping and commerce technology.
Wells Fargo uses Agent Space to help their employees use AI across trade management, contract management, and other domains. Our deep product differentiation has driven the growth that we’re seeing in customers. Now, how are we taking all this to market? There are five important things. First of all, we monetize AI in five different ways. We’re seeing growth from net new customers. We’re seeing a deeper relationship with existing customers. We’re broadening our addressable market. As a result of that, we’re seeing growth in revenue, our remaining performance obligations, our backlog, and operating margin. The five ways we monetize AI: some people pay us for some of our products by consumption. If you use our AI infrastructure, whether it’s a GPU or a TPU, or use our models, you pay by token, meaning you pay for what you use. Some of our products people pay for by subscription.
You pay a per-user-per-month fee, for example, for Agent Space or Google Workspace. Some monetization comes from increased product usage. If you use our cybersecurity agent and you run threat analysis using AI, we’ve seen huge growth in that. For example, we’ve run over 1.5 billion threat hunts, as we call them, using Gemini. That drives more usage of our security platform. Similarly, we see growth in our data cloud. We also monetize some of our products through value-based pricing.
For example, some people who use our customer service system say, “I want to pay for it by the deflection rates that you deliver.” Some people who use our creative tools to create content say, “I want to pay based on the conversion I’m seeing in my advertising system.” Finally, we also upsell people from one version to another as they use more, because we have higher-quality models, more quota, and other things in higher-priced tiers. Because of this, we’re capturing new customers faster. As I said, we’ve seen 28% sequential quarter-over-quarter growth in new customer wins in the first half of this year. Nine of the top 10 AI labs and nearly all the AI unicorns are our customers. We’re deepening our relationship with existing customers. 65% of our customers are already using our AI tools in a meaningful way.
Those customers that use our AI tools typically end up using more of our products. For example, they use our data platform or our security tools. On average, those that use our AI products use 1.5 times as many products as those that are not yet using our AI tools. That leads customers who sign a commitment or a contract to over-attain it, meaning they spend more than they contracted for, which drives more revenue growth. Finally, we’re growing and diversifying our revenue. Our revenue does not come from a single product line. We have many different product lines, all of them growing. As Sundar and Anat, our CFO, have both mentioned, we’ve made billions using AI already. We’re growing revenue while bringing operating discipline and efficiency. Our remaining performance obligation, or backlog, as it’s sometimes referred to, is now at $106 billion. It is growing faster than our revenue.
More than 50% of it will convert to revenue over the next two years. Not only are we growing revenue, but we’re also growing our remaining performance obligation. We’re also very focused on operating discipline to improve operating margins. There are three big areas of focus. One is making sure we’re super efficient in how we use our fleet and our machines so that we get capital efficiency. There are many hundreds of projects that people have done to optimize. The larger the fleet, generally, the more efficient you get, because you need less buffer for any individual customer. You’ve also seen a study that some of our scientists published on the improvements in inferencing that we’ve done, with a 33-times efficiency gain in inference using some of our models over the last year. There is a lot of focus on continuing to optimize our fleet.
We’re improving our go-to-market organization, which now has a large customer base to sell to. Selling to existing customers is always easier than selling to new customers. It helps us improve the cost of sales as a percentage of revenue. We’re also building on a large suite of products already, so it helps us improve our engineering productivity. You see that in our results. We’re growing top line and operating income. In closing, we’ve spent years building advanced AI technology of our own: chips, systems, tools, agents. We made those bets very early. Much of the work that you see today has been underway for many, many years. We’re not just reselling third-party technology. Why we’re winning is because we see this deep product differentiation now being adopted by customers. That’s leading us to win new customers, deepen our relationship with existing customers, and broaden our addressable market.
In turn, that’s leading us to grow revenue and operating income. Thank you.
Eric, Interviewer: Thank you, Thomas. Thanks so much for a lot of good stuff in there. I want to come back to where you started the presentation, talking about the state of the industry today. As we exit 2025, we’re looking towards 2026. Where are we in terms of cloud adoption, client usage trends, and how is Google Cloud evolving in terms of that competitive landscape and those secular growth themes?
Thomas Kurian, CEO, Google Cloud: Cloud adoption is still in its early stages. If you count servers, depending on which analyst you read, a vast majority of servers and apps still run on-premise in people’s data centers. There’s a lot of remaining opportunity ahead for people to migrate these workloads, to modernize them, to transform them. There are different adoption patterns we see in different industries. Some are moving more quickly. Some, for example, government agencies, move a bit slower because of compliance and other regulation. Europe has been generally slower to move because of sovereign cloud requirements; we’ve now introduced sovereign cloud offerings. We are starting to see many different drivers for people to pick that up.
In the past, people chose cloud primarily as a mechanism to get developer efficiency, meaning I can get infrastructure on demand, host applications, and save money by consolidating compute and storage. That continues to be important, but it’s no longer the primary driver. The big driver now is, I really want to transform my organization. Can you help me by bringing AI expertise and products to help me?
Eric, Interviewer: With that as a jumping-off point, when you sit and look at the enterprise landscape today and the way enterprises are adopting AI, put a finer point on your presentation in terms of how those trends inform your strategic priorities as a company.
Thomas Kurian, CEO, Google Cloud: We see organizations using AI in four domains. Some companies are using it to build digital products: Natura Cosmetics, SNAP, the work we did with Warner Brothers to create The Wizard of Oz. Those are all essentially using AI to advance a digital product. Others are using it to transform customer service, and when I say transform customer service, not just in the call center as we do with Verizon, but at the point of sale as we do with Wendy’s, and in a vehicle as we’re showing with Mercedes today in Munich. There are many different places where people see that customer-interface transformation. Others are using it to streamline the core of the company and their back office. When I say the back office: Home Depot is using it to answer HR help desk questions.
When employees ask questions regarding benefits and other things, they’re using our agent to answer those questions. AES, which is a large energy company, streamlined their regulatory and audit process, reducing the cycle time. Tyson Foods is using it in supply chain. Finally, we’ve seen a lot of organizations now using it in their IT departments. Broad brush, people in IT departments are using it to write code, and not just to write code, but to improve the quality of the code that’s being written. People are also using it for cyber, because in cyber there’s generally a bottleneck in terms of how many cyber analysts you have. These AI tools can be used both to help you identify and prioritize what threats are occurring and then much more quickly analyze whether you’ve been compromised. Those are the four big domains that we see AI being adopted for.
Eric, Interviewer: OK. When you think about your full-stack approach to AI, talk to us a little bit about how that might be creating competitive advantages in the marketplace, and how it helps translate into winning deals.
Thomas Kurian, CEO, Google Cloud: It’s a great question. Our stack is open: we offer our own accelerators, and we have a super close working relationship with NVIDIA, because people want a choice of different types of system configurations. Same thing with our models: we offer our own as well as third party. What it helps us do, though, is we can optimize things differently up and down the stack. I’ll give you an example. If you look at the work we do with capital markets, applying AI to synthesize data from information sources and then using it to feed algorithmic models, you need a certain set of skills to reason on it. You need a certain set of capabilities to choose the right tool. You need to be able to do it with ultra-low latency.
Surprisingly, at the model level, that combination of things we bring from the enterprise is the same thing you need to have a great coding tool. It turns out if you want to do software engineering, you have to choose the right tool for the right task. You want to be able to generate code with low latency, so when you do auto-completion, it happens. It’s also the same thing that applies in certain circumstances on the search side. The fact that we’re able to get all of these different design centers, and we’re using one model series for all of Alphabet as well as our customers, helps improve the model. Because we’re optimizing that model up and down, and as we had Jeff Dean and our team talk about how much more efficient we’ve become on serving, it also helps us optimize the cost of inference and serving.
We can co-design things. We get leverage because of all of the domains we’re serving, both across enterprise and the consumer side. We can also optimize the cost structure when we deliver these things.
Eric, Interviewer: OK. Building on that theme in your presentation, you touched upon the idea of your AI infrastructure and building advantage and scale around that. Talk a little bit about where custom silicon and TPUs make sense as opposed to working with external suppliers. Talk a little bit about some of the key learnings of customers that have used TPUs and the use cases where they deploy them.
Thomas Kurian, CEO, Google Cloud: Broad brush, I think when people look at models, they think there’s one type of model. There are many different types of models: dense models, mixture of experts, sparse models. Do you need a sparse core or not? We offer a range of accelerators where people really choose the right thing for their model based on a variety of factors. It often comes down to the experts sitting down and actually trying it. We see four key things. The first one is, are you doing a kind of hero model run? If you’re running a hero model run, it’s typically on a giant cluster that you want to scale out. They care a lot about flops per dollar, meaning how many flops are you getting per dollar? How efficiently are you able to load your data set into memory? How much HBM do you have?
Are all the nodes in the cluster communicating with super predictable latency, which is where the optical network comes in? Can you then use certain things like the compiler to really optimize what at the bottom level is the equivalent of an instruction set? The TPU is seen as really attractive by many of the leading labs because it gets their training runs to get much more throughput through the system. It’s also being used by lots of people to inference. We have very close working relationships with NVIDIA to allow customers to train on TPUs, serve on GPU, or vice versa. There’s a lot of things we’ve optimized with NVIDIA. For example, JAX is optimized not just on TPU but GPU. It’s not just the infrastructure but the entire software layer on top.
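Kurian’s point about JAX being optimized for both TPU and GPU is easy to see in code: a jit-compiled JAX function lowers through XLA to whatever accelerator is attached, with no source changes. A minimal, runnable illustration:

```python
# The same JAX function compiles via XLA to TPU, GPU, or CPU unchanged.
import jax
import jax.numpy as jnp


@jax.jit  # XLA compiles this once per input shape for the attached backend
def attention_scores(q, k):
    # Scaled dot-product attention scores, a core inference kernel.
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]), axis=-1)


q = jnp.ones((4, 64))
k = jnp.ones((8, 64))
print(attention_scores(q, k).shape)  # (4, 8)
print(jax.devices())                 # lists the TPU, GPU, or CPU devices in use
```

The backend choice happens at runtime, which is what lets customers train on one accelerator and serve on another without rewriting the model code.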
Eric, Interviewer: Got it. Understood. You laid out a lot of initiatives on the product side and the platform side, all leading to the types of growth we’re seeing today. What are the biggest priorities for investments in the business in support of that growth? How do you think about striking the right balance between investments and driving growth?
Thomas Kurian, CEO, Google Cloud: We look at investments in three big categories. The first is our supply chain and capital investments, which span data centers, power, long-term power contracts, and what we’re doing with our different geographical locations, because inference now needs to be in many different countries for sovereignty reasons. We’ve had a team doing that for years at real, enormous scale. We continue to do that, and we’re very thoughtful about how we’re doing it. In each area, we also look at how we get more efficient. For example, we’re constantly optimizing. One example is, as you get these more powerful chips, they also take a lot more power, and power is, in many cases, a scarce resource. We have the most efficient PUE in the world; PUE is how much power you’re consuming to deliver a given amount of flops.
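For reference, the loose gloss above maps to the standard industry definition of power usage effectiveness, which is what the "most efficient PUE" claim refers to:

```latex
% Power Usage Effectiveness (standard industry definition).
\mathrm{PUE} = \frac{\text{total facility energy}}{\text{IT equipment energy}}
% An ideal facility has PUE = 1.0 (all power reaches the IT load);
% typical data centers run above that, so lower is better.
```

Cooling overhead is usually the largest contributor to the numerator, which is why the water-cooling investment Kurian describes next feeds directly into PUE.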
We invested very early in water cooling, and water cooling gives you another lift in throughput through these systems. That’s an example of where we said, hey, there’s likely to be a power issue; let’s design a set of solutions early, and that’s helped give us an advantage there. Second, we invest in products, where we are very thoughtful and disciplined about which domains we’re solving for and how much investment we want to make. Third, we invest in our go-to-market organization. You know, when I started at Google, nobody thought we’d be where we are. In the first several years, almost all our sales were to brand new customers. Difficult to win them, but we’ve actually won many of them. Now we have teams that know how to sell specialization for specific products. We know how to sell to existing customers.
We know we have a different model to sell to new customers. All that sophistication has been built over many years.
Eric, Interviewer: You also talked in your slides about how the margins continue to build in the reported segment behind Google Cloud. Beyond getting it right on the growth side, talk to us a little bit about driving efficiencies and continuing to make progress on the margin side of the business over the long term as well.
Thomas Kurian, CEO, Google Cloud: There’s a lot of people working really hard to continue to improve top line and operating margin. Some of it comes down to really fundamental things. We made some decisions early to say we’re going to build our own chips, our own models, and also products around the models. When you’re not just distributing somebody else’s stuff, you can obviously optimize cost and improve margins. Even when we look at examples of products we built around the model: in 2021, we saw a lot of companies telling us their call centers were shut down because of COVID. They could not handle the volumes of calls coming in. We said, let’s build an AI-powered customer service system. That’s now being used at large scale. That’s an example of a very differentiated product.
It’s not just here’s a model, access it through an API; there’s a lot of capability we’ve built into it. Those early decisions, and years of continued effort, both past and ongoing, keep us very focused on improving both top line and operating income.
Eric, Interviewer: OK. I’ll try to squeeze one last one in. When you think about the deployment of AI that’s happening right now in the ecosystem, and you look at the infrastructure layer, the model layer, the application layer, where are you seeing the most exciting things being deployed that can be elements of driving growth in your business over the medium to long term?
Thomas Kurian, CEO, Google Cloud: I think we see a lot of interest in it. We’re roughly on a six-month cycle. What I mean by that is we find that a major model revision opens up an entire new category of capability. That, in turn, drives us to build value-added products on top of it. For instance, look at Veo: Veo 3 is an amazing video creation system. We now have enormous interest from advertising companies, creative labs, media companies, movie studios, et cetera. That market did not exist prior to Veo reaching that level of breakthrough. We take that, and because we’re co-engineering it with Google DeepMind, we’re able to build an entire set of assets around it as products that people can then apply to specific domains. That’s roughly the cycle we’re on. It may get faster.
A lot of it depends on what kind of breakthroughs we’re working on.
Eric, Interviewer: No, great example. I think we’re going to have to leave it there. Thomas, thank you so much for coming to the conference this year. Please join me in thanking Thomas and the Alphabet team for being part of the conference.