Earnings call transcript: IREN Ltd's Q1 FY2026 sees stock drop despite AI gains

Published 07/11/2025, 00:42

IREN Ltd reported its earnings for the first quarter of fiscal year 2026 (ended September 30, 2025), highlighting a substantial revenue increase and strategic advancements in AI cloud services. Despite these positive developments, the company's stock fell by 12.37% in aftermarket trading, reflecting investor concerns over increased operating expenses and the absence of a clear earnings beat.

Key Takeaways

  • IREN Ltd reported a fifth consecutive quarter of record revenues.
  • The company secured a $9.7 billion AI cloud contract with Microsoft.
  • Stock price fell 12.37% in aftermarket trading.
  • Operating expenses rose due to higher depreciation and share-based payments.

Company Performance

IREN Ltd continues to demonstrate strong performance, with Q1 FY26 revenue reaching $240 million, marking a 28% increase quarter-over-quarter and a 355% increase year-over-year. The company attributes this growth to robust demand for AI cloud services and strategic partnerships, such as the significant contract with Microsoft.

Financial Highlights

  • Revenue: $240 million (↑28% QoQ, ↑355% YoY)
  • Adjusted EBITDA: $92 million
  • Operating expenses increased due to higher depreciation and share-based payments.

Market Reaction

Following the earnings call, IREN Ltd's stock price fell by 12.37% to $75.74. The decline came despite the company reporting record revenue and a major AI cloud contract, suggesting investor concern over rising costs and earnings expectations that may not have been met.

Outlook & Guidance

IREN Ltd is targeting annualized run-rate revenue of $3.4 billion by the end of 2026. The company plans to deploy an additional 40,000 GPUs in Canada and is focused on capital-efficient expansion. These initiatives aim to capitalize on strong demand for AI cloud services amid the supply-demand imbalance in the market.

Executive Commentary

"We control the entire stack from the substation all the way down to the GPU," stated Daniel Roberts, Co-CEO, emphasizing the company's vertically integrated model. CFO Anthony Lewis highlighted the financial outlook, noting, "We expect an unlevered IRR of low double digits."

Risks and Challenges

  • Rising operating expenses could impact profitability.
  • The supply-demand imbalance in AI infrastructure may pose challenges.
  • Execution risks associated with large-scale expansion plans.
  • Potential dependency on key contracts like the one with Microsoft.

Q&A

During the earnings call, analysts focused on the economics of the Microsoft contract, seeking clarity on the company's preference for AI cloud services over colocation. Executives also addressed questions about the future-proofing of data center infrastructure and the potential for strong internal rates of return.

Full transcript - IREN Ltd (IREN) Q1 2026:

Operator: Thank you for standing by, and welcome to the IREN Q1 FY26 results briefing. If you wish to ask a question, you will need to press the star key followed by the number one on your telephone keypad. I would now like to hand the conference over to Mike Power, VP of Investor Relations. Please go ahead.

Mike Power, VP of Investor Relations, IREN: Thank you, Operator. Good afternoon and welcome to IREN's Q1 FY26 results presentation. I'm Mike Power, VP of Investor Relations, and with me on the call today are Daniel Roberts, co-founder and co-CEO; Anthony Lewis, CFO; and Kent Draper, Chief Commercial Officer. Before we begin, please note this call is being webcast live with a presentation. For those that have dialed in via phone, you can elect to ask a question via the moderator after our prepared remarks. Before we begin, I'd like to remind you that certain statements that we make during the conference call may constitute forward-looking statements, and IREN cautions listeners that forward-looking information and statements are based on certain assumptions and risk factors that could cause actual results to differ materially from the expectations of the company.

Listeners should not place undue reliance on forward-looking information or statements, and I'd encourage you to refer to the disclaimer on slide two of the accompanying presentation for more information. With that, I'll now turn over the call to Daniel Roberts.

Daniel Roberts, Co-Founder and Co-CEO, IREN: Thanks, Mike, and thank you all for joining us for IREN's Q1 2026 earnings call. Today, we'll provide an overview of our financial results for the first fiscal quarter, ended September 30, 2025, highlight key operational milestones, and importantly, discuss how our AI cloud strategy is driving strong growth. We'll then open the call for questions at the end. Turning to our Q1 FY26 results, fiscal year 2026 is off to a really good start. We delivered a fifth consecutive quarterly increase in revenues and a strong bottom line. Revenue reached $240 million and adjusted EBITDA was $92 million, noting, of course, that net income and EBITDA importantly reflected an unrealized gain on financial instruments. This performance reflects our continued disciplined execution, along with the benefits of having a resilient, vertically integrated platform. Turning now to Microsoft and the cloud contract.

Earlier this week, we announced a $9.7 billion AI cloud contract with Microsoft, which was a defining milestone for our business that underscores the strength and scalability of our vertically integrated AI cloud platform. The agreement not only validates our position as a trusted provider of AI cloud service, but also opens up access to a new customer segment among the global hyperscalers. Under this five-year contract, IREN will deploy NVIDIA GB300 GPUs across 200 megawatts of data centers at our Childress campus. The agreement includes a 20% upfront prepayment, which helps support capital expenditures as they become due through 2026. The contract is expected to generate approximately $1.94 billion in annual recurring revenue. Beyond the obvious positive financial impact, the contract carries strategic value of significance for us.
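
For context on how the headline figures fit together, here is a minimal back-of-envelope check in Python using only the contract value, term, and prepayment rate quoted above; the arithmetic is illustrative and is not taken from IREN's own model.

```python
# Back-of-envelope check of the Microsoft contract figures quoted above
# (inputs as stated on the call; the arithmetic is illustrative only).
total_contract_value = 9.7e9   # $9.7 billion over the contract term
term_years = 5                 # five-year contract
prepayment_rate = 0.20         # 20% upfront prepayment

annual_recurring_revenue = total_contract_value / term_years
prepayment = total_contract_value * prepayment_rate

print(f"Implied annual recurring revenue: ${annual_recurring_revenue / 1e9:.2f}B")  # ~$1.94B
print(f"Implied total prepayment: ${prepayment / 1e9:.2f}B")                        # ~$1.94B
```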

It not only positions IREN as a contributor towards Microsoft's AI roadmap, but also demonstrates to the market our ability to serve an expanded customer base, which includes a range of model developers, AI enterprises, and now one of the largest technology companies on the planet. As enterprises and other hyperscalers accelerate their AI buildout, we expect that our combination of power, AI cloud experience, and execution capability will continue to position us as a partner of choice. Looking ahead, we're executing now on a plan that will see our GPU fleet scale from 23,000 GPUs today up to 140,000 GPUs by the end of 2026. When fully deployed, this expansion is expected to support in the order of $3.4 billion in annualized run rate revenue. Importantly, this expansion leverages just 16% of our three gigawatts in secured power, leaving ample capacity for future expansion.

With that overview in mind, let's turn to the next section, a closer look at our AI cloud platform and how we're positioned to scale in the years ahead. As I alluded to earlier, a key driver of IREN's competitive advantage in AI cloud services is our vertical integration. We develop our own greenfield sites, engineer our own high-voltage infrastructure, build and operate our own data centers, and deploy our own GPUs. Simply put, we control the entire stack from the substation all the way down to the GPU. We believe strongly that this end-to-end integration and control is a key differentiator that positions us for significant growth. This model of vertical integration eliminates dependence on third-party colocation providers and, most importantly, removes all counterparty risk associated. This allows us to commission GPU deployments faster with full control over execution and uptime.

For our customers, this translates into scalability, cost efficiency, and a superior customer service with tighter control over performance reliability and delivery milestones, driving tangible value and certainty. For those reasons, our customers, including Microsoft, view IREN as a strategic partner in delivering cutting-edge AI compute, recognizing our deep expertise in designing, building, and operating a fully integrated AI cloud platform. On that note, we're excited to announce a further expansion of our AI cloud service, targeting a total of 140,000 GPUs by the end of 2026. This next phase includes the deployment of an additional 40,000 GPUs across our McKenzie and Canal Flats campuses, which are expected to generate in the order of $1 billion in additional ARR. When combined with the $1.9 billion expected from the Microsoft contract and $500 million from our existing 23,000 GPU deployment, this expansion provides a clear pathway to approximately $3.4 billion.

That $3.4 billion represents total annualized run rate revenue once fully ramped. Importantly, this incremental 40,000 GPU buildout will be executed in a highly capital-efficient manner through leveraging existing data centers. While we have not yet purchased GPUs for the deployment, we continue to see strong demand for air-cooled variants of NVIDIA's Blackwell GPUs, including both the B200 and the B300. Given their efficient deployment profile, we expect these to form the basis of this expansion. That said, we will continue to monitor customer demand closely and pursue growth in a disciplined, measured way. This full expansion to 140,000 GPUs will only require about 460 megawatts of power, representing roughly 16% of our total secured power portfolio. This leaves substantial optionality for future growth and, importantly, continued scalability across our portfolio.
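
The $3.4 billion target reconciles directly from the components quoted above; the short sketch below re-adds them and also derives an approximate power-per-GPU ratio from the stated fleet and megawatt figures (both calculations are illustrative, not company disclosures).

```python
# Reconciling the ~$3.4B annualized run-rate revenue target from its stated parts.
arr_microsoft  = 1.9e9   # ~$1.9B ARR from the Microsoft contract
arr_additional = 1.0e9   # ~$1.0B ARR from the additional 40,000 GPUs
arr_existing   = 0.5e9   # ~$0.5B ARR from the existing 23,000 GPU deployment
total_arr = arr_microsoft + arr_additional + arr_existing
print(f"Total ARR once fully ramped: ${total_arr / 1e9:.1f}B")   # ~$3.4B

# Implied all-in power intensity of the 140,000-GPU fleet (a rough derived ratio).
gpus, power_mw, secured_gw = 140_000, 460, 3.0
print(f"~{power_mw * 1_000 / gpus:.1f} kW per GPU all-in; "
      f"{power_mw / (secured_gw * 1_000):.1%} of secured power")
# ~3.3 kW per GPU; ~15.3%, in line with the "roughly 16%" of ~3 GW cited above.
```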

The key takeaway here is that we have substantial near-term growth being actively executed upon, but also have significant and additional organic growth ahead of us. Turning now to slide eight, which highlights the British Columbia data centers supporting our expansion to 140,000 GPUs. At Prince George, our ASIC-to-GPU swap-out program is progressing well. The same process will soon extend to our McKenzie and Canal Flats campuses, where we expect to migrate ASICs to GPUs with similar efficiency and speed. Together, these sites are allowing us to fast-track our growth in supporting high-performance AI workloads, scaling it into what is becoming one of the largest GPU fleets in North America. Turning to Childress, where we are now accelerating the construction of Horizons 1-4 to accommodate the phase delivery of NVIDIA GB300 NVL72 systems for Microsoft.

We've significantly enhanced our original design specifications to meet hyperscale requirements and also further ensure durable long-term returns from our data center assets. The facilities have been engineered to Tier 3 equivalent standards for concurrent maintainability, ensuring continuous operations even during maintenance windows. A key feature of this next phase is the establishment of a network core architecture capable of supporting single 100-megawatt superclusters. A unique configuration that enables high-performance AI training for both current and next-generation GPUs. We're also incorporating flexible rack densities ranging from 130-200 kilowatts per rack, which allows us to accommodate future chip generations and the evolving power and density requirements without major structural upgrades. While these design enhancements have resulted in incremental cost increases, they provide long-term value protection, enabling our data centers to support multiple generations and reduce recontracting risk typically associated with lower spec builds.

In short, we're building Childress not just for today's GPUs and the Microsoft contract in front of us, but also for the next generations of AI compute. Beyond the accelerated development of Horizons 1 through to 4, the remaining 450 megawatts, as you can see in the image on screen, of secured power at Childress provides substantial expansion potential for future Horizons, numbered 5 through to 10. Design work is underway to enable liquid-cooled GPU deployments across the entire site, positioning us to scale seamlessly alongside customer demand. Finally, turning to Sweetwater, our flagship data center hub in West Texas, which has been somewhat overshadowed in recent months by the activity in Childress and Canada. At full buildout, Sweetwater will support up to 2 gigawatts, 2,000 megawatts of gross capacity, all of which has been secured from the grid.

As shown in the chart, this single hub rivals and, in most cases, exceeds the entire scale of total data center markets today. While the recent headlines have naturally been dominated by our AI cloud expansion at other sites, Sweetwater is a pretty exciting platform asset, giving us the capability to continue servicing the wave of AI compute demand. Sweetwater 1 energization continues to remain on schedule, with more than 100 people mobilized on site to support construction of what is becoming one of the largest high-voltage data center substations in the United States. All exciting stuff. With that, I'll now hand over to Anthony, who will walk through our Q1 FY2026 results in more detail.

Anthony Lewis, CFO, IREN: Thanks, Dan. And thanks, everyone, for your attendance today. Continued operational execution was reflected in another quarter of strong financial performance.

Q1 FY26 marked our fifth consecutive quarter of record revenues, with total revenue reaching $240 million, up 28% quarter over quarter and 355% year over year. Operating expenses increased primarily on account of higher depreciation, reflecting ongoing growth in our platform and our higher SG&A, the latter primarily driven by a materially higher share price, resulting in acceleration of share-based payment expense and a higher payroll tax expense associated with employees. $63 million were both significantly up, largely on account of unrealized gains on prepaid forward and cap call transactions entered into in connection with our convertible note financings. Adjusted EBITDA was $92 million, reflecting continued margin strength, partially offset by that higher payroll tax of $33 million accrued in the quarter on account of strong share price performance. Turning now to our recently announced AI cloud partnership with Microsoft.

As Dan mentioned, this is a very significant milestone for IREN. It not only delivers strong financial returns but also creates a significant long-term strategic partnership for the business. Focusing on the financials, the $9.7 billion contract is expected to deliver approximately $1.9 billion in annual revenue once the four phases come online, with an estimated 85% project EBITDA margin. This strong margin, which reflects our vertically integrated model, incorporates all direct operating expenses across both our cloud and data center operations, supporting the transaction, including power, salaries and wages, maintenance, insurance, and other direct costs.

These cash flows deliver an attractive return on the cloud investment, i.e., the $5.8 billion CapEx for the GPUs and ancillaries, after deducting an appropriate internal colocation charge, ensuring that the project delivers robust cloud returns as well as an attractive return on our long-term investment in the Horizon data centers, which will deliver returns for many years into the future. The transaction has also a number of features that allow us to undertake the transaction in a capital-efficient way. Firstly, the payments for the CapEx are aligned with the phase delivery of the GPUs across the calendar year 2026 as we deliver those four phases. Secondly, the $1.9 billion in customer prepayments, being 20% of total contract revenue, paid in advance of each tranche, provides funding for circa one-third of the funding requirement at the outset.

Thirdly, the combination of the latest generation of GPUs and the very strong credit profile of Microsoft should allow us to raise significant additional funding, secured against the GPUs and the contracted cash flows on attractive terms. While the final outcome will be subject to a range of considerations and factors, we are targeting circa $2.5 billion through such an initiative, and depending on final terms and pricing, there is meaningful upside to that, noting again the very high quality of our counterparty. We also have a range of options available to fund the remaining $1.4 billion, including existing cash balances, operating cash flows, and a mix of equity convertible notes and corporate instruments. On that note, turning more generally to CapEx and funding, we continue to focus on deepening our access to capital markets and diversifying our sources of funding.
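
Taken together, the funding sources described above roughly bridge the $5.8 billion of GPU and ancillary CapEx; the sketch below lays out that bridge using the quoted targets, noting that the final mix will depend on the terms and pricing discussed on the call.

```python
# Indicative funding bridge for the ~$5.8B cloud CapEx, using the figures quoted above.
capex           = 5.8e9   # GPUs and ancillaries for the Microsoft deployment
prepayments     = 1.9e9   # ~20% customer prepayments, circa one-third of CapEx
target_gpu_debt = 2.5e9   # targeted financing secured against GPUs / contracted cash flows

remainder = capex - prepayments - target_gpu_debt
print(f"Prepayments cover ~{prepayments / capex:.0%} of CapEx")            # ~33%
print(f"Remaining to fund from cash, operating cash flows and other "
      f"instruments: ${remainder / 1e9:.1f}B")                            # ~$1.4B
```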

We issued $1 billion in zero-coupon convertible notes during October, which was extremely well supported, and we also secured an additional $200 million in GPU financing to support our AI cloud expansion in Prince George, bringing total GPU-related financings to $400 million to date at attractive rates. Taking into account recent fundraising initiatives, our cash at the end of October stood at $1.8 billion. Our upcoming CapEx program, which includes the construction of the Horizon data centers for the Microsoft transaction, will be met from a combination of this strong starting cash position, operating cash flows, the Microsoft prepayments, as just noted, and other financing streams that are underway.

These include the GPU financing facilities that we discussed, as well as a range of other options under consideration, from other forms of secured lending against our fleet of GPUs and data centers through to corporate-level issuance, whilst maintaining an appropriate balance between debt and equity to maintain a strong balance sheet. With that, we'll now turn the call over to Q&A. Thank you. If you wish to ask a question, please press Star 1 on your telephone and wait for your name to be announced. If you wish to cancel your request, please press Star then 2. If you're using a speakerphone, please pick up the handset to ask your question. The first question today comes from Nick Giles from B Riley Securities. Please go ahead. Yeah, thank you, Operator. And hi, everyone. Thanks so much for the update today.

Guys, I want to congratulate you on this significant milestone with Microsoft. This was really great to see. I have a two-part question. Dan, you mentioned strategic value, and I was first hoping you could expand on what this deal does from a commercial perspective. Secondly, I was hoping you could speak to the overall return profile of this deal and how you think about hurdle rates for future deals. Thank you very much. Sure. Thanks, Nick. Appreciate the ongoing support. In terms of the strategic value, I think undoubtedly proving that we can service one of the largest technology companies on the planet has a little bit of strategic value. Below that, the fact that this is our own proprietary data center design.

We have designed everything from the substation down to the nature of the GPU deployment, and that has been deemed acceptable by a trillion-dollar company. I think that has a bit of strategic value, both in terms of demonstrating to capital markets and investors that we are on the right track, but also importantly, in terms of the broader customer ecosystem and that validation. We have seen that play out over the days since the announcement. In terms of hurdle rates and returns, I think it is worth, Anthony, if you can, to jump into this. I think it is fair to say that IRRs, hurdle rates, and financial models have dominated our lives for the last six weeks. There is probably a little bit we can outline in this regard. Sure. Thanks, Dan. Thanks for the question.

Just in terms of the returns on the transaction: as I noted in the introductory comments, when we look at the cloud returns, we take away what we think to be an arm's length colocation rate, so we effectively charge the deal for the cost of renting the data center capacity. After we take that into account on an unlevered basis, and assuming that there are zero cash flows or residual value (RV) associated with the GPUs after the term of the contract, we expect an unlevered IRR of low double digits. Obviously, we'll be looking to add some leverage to the capital structure for the transaction, as we also discussed. Once we take that target $2.5 billion of additional leverage into account, you're achieving a levered IRR in the order of circa 25-30%. Obviously, that is assuming that $2.5 billion package.

It also assumes that the remaining funding is coming from equity as opposed to other sources of capital, which we might also have access to. I'd also note that we said that there might well be upside on that $2.5 billion. Obviously, at a $3 billion leverage package against the GPUs and a secured financing package, you could see that levered return increase by circa 10%. In terms of the RV, obviously, in those numbers, we're just reflecting zero economic value in the GPUs at the end of the term. If, for example, you were to assume a 20% RV, obviously, that has a material impact. Unlevered IRRs would increase to high teens, and your levered IRRs would be somewhere between 35-50%, depending on your leverage assumptions. Yeah. I think maybe just to jump in as well. Thanks, Anthony. That's all absolutely correct.
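
For readers less familiar with the leverage mechanics being described, the sketch below uses deliberately made-up round numbers, not IREN's model or the Microsoft contract economics, to show why adding debt that costs less than the project return lifts the equity IRR, and why the leverage and residual value assumptions move the result so much.

```python
# Illustrative only: hypothetical round numbers, NOT IREN's model or the
# Microsoft contract economics. Shows the mechanic of levering a contracted
# cash-flow stream: debt cheaper than the project return lifts the equity IRR.

def irr(cashflows, lo=-0.99, hi=10.0):
    """Internal rate of return via bisection; cashflows[0] is the upfront outlay."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

capex, years, annual_cf = 100.0, 5, 30.0             # hypothetical project, zero residual value
unlevered = [-capex] + [annual_cf] * years
print(f"Unlevered IRR: {irr(unlevered):.1%}")         # ~15% on these made-up inputs

debt, rate = 43.0, 0.08                               # hypothetical amortizing loan
payment = debt * rate / (1 - (1 + rate) ** -years)    # level annual debt service
levered = [-(capex - debt)] + [annual_cf - payment] * years
print(f"Levered IRR:   {irr(levered):.1%}")           # ~20%: same project, higher equity return
```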

There are a lot of numbers in there, which is demonstrative of the amount of time we've spent thinking about IRRs. I think just to reiterate a couple of points, one is we've clearly divided out our business segments into standalone operations for the purposes of assessing risk return against a prospective transaction. To be really clear, all of those AI cloud IRRs assume a colocation charge. They assume a revenue line for our data centers. Our data centers, we've assumed, earn internally $130 per kilowatt per month escalating, which is absolutely a market rate of return, particularly considering the first five years is underwritten by a hyperscale credit. That is probably the first point I'd make. It's also really important to mention that we've optimized elsewhere.

The 76,000 GPUs that we've procured for this contract at a $5.8 billion price, Dell have really looked after us to the point where they've got an inbuilt financing mechanism in that contract where we don't have to pay for any GPUs until 30 days after they're shipped. There are further enhancements there. The final point I'd reiterate is this 20% prepayment, which I don't believe we've seen elsewhere, accounts for a third of the entire CapEx of the GPU fleet. I guess we've been asked previously why we would prefer to do AI cloud versus colocation. As one very single small data point, we are getting paid a third of the CapEx upfront here as compared to having to give away big chunks of equity in our company to get access to a colocation deal.

We're really pleased that this leads us towards that $3.4 billion in ARR by the end of 2026 on returns that are pretty attractive. Yeah, it's a good result. Anthony, Dan, I really appreciate all the detail there. One more, if I could, I was just wondering if you could give us a sense for the number of GPUs that will ultimately be deployed as part of the Microsoft deal. And then as we look out to year six and beyond, I mean, can you just speak to any of the kind of future proofing you've done of the Horizon platform and what can ultimately be accommodated in the long term for future generations of chips? Yep. I'm happy to jump in and take that one, Dan.

In terms of the number of GPUs to service this contract, I'd draw your attention to some of our previous releases where we've said that each phase of Horizon would accommodate 19,000 GB300s. Obviously, we're talking about four phases here with respect to that. In terms of future proofing of the data centers, there are a number of elements to it, but the primary one is that we have designed for rack densities here that are capable of handling well in excess of the GB300 rack architecture. To give you specific numbers there, the GB300s are around 135 kilowatts a rack for the GPU racks. Our design at the Horizon facilities can accommodate up to 200 kilowatts a rack. That is the primary area where we have future proofed the design. As Dan also mentioned in the remarks on the presentation, we have enhanced the design in a number of ways.

These include effectively full Tier 3 equivalent concurrent maintainability. So there are a number of elements that have been accommodated into the data centers to ensure that they can continue to support multiple generations of GPUs. Very helpful, Kent. Guys, congratulations again, and keep up the good work. Thank you. The next question comes from Paul Golding from Macquarie. Please go ahead. Thanks so much for taking the question and congrats on the deal and all the progress with HPC. I wanted to ask, I guess this is a quick follow-on to the IRR question. Just on our back of the envelope math, it looks like pricing per GPU hour may be on the rise or at the higher end of that $2-$3 range, assuming full utilization, so presumably potentially even higher.

How should we think about the pricing dynamics in the marketplace right now on cloud, given the success of this deal and what seems to be fairly robust pricing? And then I have a follow-up. Thank you. Sure. Hi, Paul. Sorry. You go ahead, Dan. Sorry. Look, I'll let Kent talk a bit more about the market dynamic, but it is absolutely fair to say that we're seeing a lot of demand. That demand appears to increase month on month in terms of the specific dollars per GPU hour. We haven't specified that exactly. However, we have tried to give a level of detail in our disclosures, which allows people to work through that. I think, importantly for us, rather than focusing on dollars per GPU hour, which I think your statement is correct, is focus on the fundamental risk return proposition of any investment.
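
The disclosures do allow an implied rate to be worked through, along the lines of the analyst's back-of-envelope; the sketch below assumes full utilization and uses the roughly 76,000-GPU and $1.94 billion-per-year figures cited elsewhere on the call (illustrative arithmetic, not a disclosed price).

```python
# Implied $/GPU-hour at full utilization, using figures quoted elsewhere on the call.
annual_revenue = 9.7e9 / 5       # ~$1.94B per year over the five-year Microsoft contract
gpus = 4 * 19_000                # four Horizon phases of ~19,000 GB300s each
hours_per_year = 24 * 365        # 8,760 hours

implied_rate = annual_revenue / (gpus * hours_per_year)
print(f"Implied rate at full utilization: ~${implied_rate:.2f} per GPU-hour")
# ~$2.9/GPU-hour, consistent with "the higher end of that $2-$3 range";
# at lower utilization, the effective hourly price would be higher still.
```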

When we've got the ability to invest in an AI cloud delivering what is likely to be in excess of 35% levered IRRs against a Microsoft credit, I mean, you kind of do that every day of the week. Yeah. Thanks, Dan. Paul, with regard to your specific question around demand, we continue to see very good levels of demand across all the different offerings we have. The air-cooled servers that we are installing up in our facilities in Canada lend themselves very well to customers who are looking for 500-4,000 GPU clusters and want the ability to scale rapidly. As we've discussed before, transitioning those existing data centers over from their current use case to AI workloads is a relatively quick process, and that allows us to service the growth requirements of customers in that class very well.

Case in point, we've been able to pre-contract for a number of the GPUs that we purchased for the Canadian facilities well in advance of them arriving out at the sites. This is something that customers have historically been pretty reticent to do. That level of demand exists in the market, as well as ongoing trust and credibility of our platform. With both existing and new customers, that is allowing us to take advantage and pre-contract a lot of that away. Obviously, with respect to the Horizon One buildout for Microsoft, this is the top-tier liquid-cooled capacity from NVIDIA. We continue to see extremely strong demand for that type of capacity.

The fact that we are able to offer that means that we can genuinely serve all customer classes from hyperscalers, the largest foundational AI labs, and largest enterprises with that liquid-cooled offering down to top-tier AI startups and smaller-scale inference enterprise users at the BC facilities. Thanks for that color, Kent and Dan. As a follow-up, as we look out to Sweetwater One Energization coming up fairly soon here in April, are you able to speak to any inbound interest you're getting on cloud at that site? I know it's early days just from a construction perspective, maybe for the facilities themselves, but any color there and maybe whether you would consider hosting at that site, given the return profile and potential cash flow profile that you would get from engaging in the cloud business over a period of time. Thank you. Yeah.

In terms of the level of interest and discussions that we're having, we're seeing a strong degree of interest across all of the sites, including Sweetwater as well. Obviously, very significant capacity available at Sweetwater, as Dan mentioned. With initial energization there in April 2026, which is extremely attractive in terms of the scale and time to power. I think it's very fair to say that we're seeing strong levels of interest across all the potential service offerings. As it relates to GPU as a service and colocation, as previously, we will continue to do what we think is best in terms of risk-adjusted returns. Anthony outlined the risk-adjusted returns that we're seeing in colocation—sorry, in GPU as a service specifically—at the moment. As we've outlined over the past number of months, that does look more attractive to us today.

As we continue to see increasing supply-demand imbalance within the industry, that may well feed through into colocation returns where it makes sense to do that in the future. As it stands today, certainly the return profile that we're seeing in GPU as a service, we think, is incredibly attractive. Great. Thanks so much and congrats again. Thank you. The next question comes from Brett Knoblauch from Cantor Fitzgerald. Please go ahead. Hi, guys. Thanks for taking my question. On the $5.8 billion order from Dell, can you maybe parse out how much of that is allocated to GPUs and the auxiliary equipment? On the auxiliary equipment, say you wanted to retrofit the Horizon data centers with new GPUs in the future, do you also need to retrofit the auxiliary equipment?

Out of that total order amount, I mean, it's fair to say the GPUs constitute the vast majority of it, but there are some substantial amounts in there for the backend networking for the GPU clusters, which is the top-tier InfiniBand offering that's currently available. In terms of future proofing, we'll have to see how much of that equipment may or may not be reusable for future generations of GPUs. As I was referring to earlier, the vast majority of our data center equipment and the way that we have structured the rack densities within the data center mean that the data center itself is future proofed. In terms of the specific equipment for this cluster, it remains to be seen whether that will be able to be reused. Perfect. Thank you.

On maybe the new 40,000 order that sounds like it's going to be plugged in in Canada, you talked about maybe a very efficient CapEx build for those data centers. Could you maybe elaborate a bit more on that? I know when the AI craze maybe first got started 18 months ago, you guys flagged that you were running GPUs up there and ensured that you built for less than $1 million a megawatt. Are we closer to that number for this, or are we just well below maybe what Horizons 1 through 4 cost on a per megawatt basis? Yeah. In terms of the basic transition of those data centers over to AI workloads, it is relatively minimal in terms of the CapEx that is required.

The vast majority of the work is removing ASICs, removing the racks that the ASICs sit on, and replacing those with standard data center racks and PDUs, so the power distribution units that can accommodate the AI servers. That is relatively minimal. As we've discussed before, it's a matter of weeks to do that conversion. From a CapEx perspective, it is not material. The one element that may be more material in terms of that conversion is adding redundancy, if required, to the data centers. That would typically cost around $2 million a megawatt if we need to do that. Obviously, in the context of a full buildout like we're seeing of liquid-cooled capacity at Horizon, it's extremely capital and CapEx efficient. Awesome. Thank you, guys. I'll hop back in the queue. Congrats again. Thank you. The next question comes from Dylan Heslin from Roth Capital Partners.

Please go ahead. Hey, thanks for taking my questions and passing on our congrats on the Microsoft deal as well. To start, with Microsoft, was colocation ever on the table with them? Did they come to you asking for AI cloud, or how did those negotiations sort of fall out? Just thinking about the best way to answer this. So we've been talking to Microsoft for a long period of time, and the nature of those conversations absolutely did evolve over time. Is their preference the cloud deal? Possibly. At the end of the day, we want to focus on cloud, and that was the transaction we were comfortable with. Conversations really focused around that over the last six weeks or so. I think if I may, I'd talk more generically around these hyperscale customers because, obviously, we weren't just talking to Microsoft.

I think there probably is a stronger preference from those to be looking at more colocation and infrastructure deals rather than cloud deals. It also is the case that there's an appetite for a combination. It may be that we do some colocation in the future. Yeah, I think different hyperscalers have different preferences. We'll entertain them all. Given the nature of the deal we did with a 20% prepayment, funding a third of CapEx, and a 35%+ equity IRR, we're feeling pretty good about pursuing AI cloud. Got it. Thank you. Just as a follow-up with the rest of Childress, is there any significance to the size of the Microsoft deal starting at 200 megawatts? Do they have interest in the rest of the campus? Have you talked to them about that yet?

Again, I'm going to divert the question a little bit because we've got some pretty strong confidentiality provisions. Let me talk generically. There is appetite from a number of parties in discussing cloud and other structures well above the 200 megawatts that's being signed with Microsoft. Okay. Great. Thanks. Thank you. The next question comes from John Tadaro from Needham. Please go ahead. Great. Thanks for taking my question and congrats on the contract. I guess just one on that as we dig a little bit more in. Any kind of penalties or anything related to the timeline of delivering capacity? Just wondering if there's guardrails around that. I do have a follow-up on CapEx. There's always a penalty, whatever you do in life, if you don't do what you promise you're going to do.

We're very comfortable with the contractual tolerances that have been negotiated, the expected dates versus contractual penalties and other consequences. I can't comment more specifically beyond that on this call. The other thing I would reiterate is we have never, ever missed a construction or commissioning date in our life as a listed company. I think you can take a lot of comfort that if we've put something forward to Microsoft and agreed it there, and if we've put something forward to the market, our reputations are on the line, our track record is on the line, we're going to be very confident we can deliver it, and potentially even exceed it. Got it. Understood. Just following up on the CapEx, that $14 million-$16 million on the, I think it was the data center side.

Just wondering if there's anything kind of additional in there that would get it north of the colo items other folks are talking about, if maybe there's some networking or cabling included in that, or any contribution from tariffs are being considered there? Yeah. Sorry. To give some additional color there. Yes, in terms of networking, etc. Again, as Dan mentioned in his presentation earlier. This is designed, the Horizon campus is designed to be able to operate 100 megawatt superclusters. That does raise a significant level of additional infrastructure that is required over being able to deliver smaller clusters. Certainly, some of the costs that are in the number that you mentioned are related to the ability to do that. That will not necessarily be a requirement of every customer moving forward. That probably is an element that is somewhat unique. Understood.

Thank you, guys. Thank you. The next question comes from Stephen Glagola from Jones Trading. Please go ahead. Hey, thanks for the question. On your British Columbia GPUs, can you maybe just provide an update on where you guys stand with contracting out the remaining 12,000, I believe, GPUs of the initial 23,000 batch? And are you seeing any demand for your bare metal offering in BC outside of AI-native enterprises? Thank you. Yeah. Happy to give an update there. We'd previously put out guidance a couple of weeks ago that we'd contracted 11,000 out of the 23,000 that were on order. Subsequent to that, we have contracted a bit over another 1,000 GPUs. Primarily, the ones that are not yet contracted are the ones that are arriving latest in terms of delivery timelines. As I mentioned earlier, we are seeing an increased appetite from customers to pre-contract.

These are GPUs that are a little further out in terms of delivery schedules relative to the ones that have already been contracted. Having said that, we continue to see very strong levels of demand. We are in late-stage discussions around a significant portion of the capacity that has not yet been contracted. We continue to see very good demand leading into the start of next year as well and are receiving an increasingly large number of inbounds from a range of different customer classes. You mentioned AI natives. Yes, that has been a portion of the customer base that we've serviced previously, but we are also servicing a number of enterprise customers on an inference basis. It is a pretty wide-ranging customer class that we're servicing out of those British Columbia sites. Thanks. Appreciate it. Thank you.

The next question comes from Joe Vafi from Canaccord Genuity. Please go ahead. Joe, your line is open if you'd like to ask your question. I'll move on to the next question in the—oh, sorry. Yeah. Sorry, guys. Really sorry. Congrats from me too on Microsoft. Maybe, Dan, you could kind of walk us through what you were thinking in your head. Clearly, some awesome IRRs here on the Microsoft deal. But how are you thinking about risk on a cloud deal here versus a straight colo deal, which probably wouldn't have had the return, but where maybe the risk profile may be lower? And then I'll have just a quick follow-up. Thanks. Thanks, Joe. Look, it's funny. I actually see risk very differently. So we've spoken about colocation deals with these hyperscalers.

If you model out a 7-8% starting yield on cost and run that through your financial model, what you'll generally see is that you'll struggle to get your equity back during the contracted term. You are relying on recontracting beyond the end of that 15-year period to get any sort of equity return. In terms of risk, I would argue that there's a far better risk proposition implicit in the deal that we've signed, going down the cloud route. For the shorter-term contracts on the colo side, where you may not have a hyperscale credit, you're running significant GPU refresh risk against companies that don't necessarily have the balance sheet today to support confidence in that GPU refresh. Again, we think about it in business segments. We think about our data center business.

It has got a great contract internally, linked to Microsoft as a tenant. That data center itself is future-proofed, accommodating rack densities of up to 200 kilowatts per rack. It is also the case that in five years, the optionality provides further downside protection. Upon expiry of the Microsoft contract, maybe we can run these GPUs for additional years, which we've seen with prior generations of GPUs like the A100s. Assuming that isn't the case, we've got a lot of optionality within that business. We could sign a colocation deal at that point. We could relaunch a new cloud offering using latest-generation GPUs. My concern with these colocation deals is what you're doing is you're transferring an interest or an exposure to an asset that is inherently linked to this exponential world of technology and demand and the upside that that may entail.

You're swapping that for a bond position in varying degrees of credit with the counterparties. If you're swapping an asset for a bond exposure to a trillion-dollar hyperscaler and you're kind of hoping you might get your equity back after the contracted period, I mean, that's one way to look at it. If you're swapping your equity exposure for a bond exposure in a smaller neocloud without a balance sheet, then is that a good decision for shareholders? We just haven't been comfortable. I get it, Dan. We've run some DCFs on some colo deals here in the last couple of months. There's a lot to be learned when you do it. There's no doubt. Just on this prepayment from Microsoft, I know you've got some strong NDAs here, but it's kind of a feather in your cap to get that much in a prepayment.

Anything else to say on how maybe your qualifications, or how Microsoft and you perhaps, came to the agreement to pre-fund the GPU purchases out of the box? Thank you. Look, yeah, getting a third of your CapEx funded through a prepayment from the customer is fantastic from our perspective. We are super appreciative of Microsoft coming to the table on that. What that allows us to do is to drive a really good IRR and return to equity for our shareholders. Again, linking back to what Anthony said earlier, we expect 35% equity IRRs from this transaction after accounting for an internal data center charge. Trying to create that apples-to-apples comparison for a neocloud that has an infrastructure charge, even after that, we are looking at 35% plus.

Also, what's really important to clarify is the equity portion of that IRR we have assumed is funded with 100% ordinary equity, which, given our track record in raising convertibles, given the lack of any debt at a corporate level, is probably conservative again. From a risk-adjusted perspective linked to a trillion-dollar credit and the ability to fund it efficiently, I mean, we're really happy with the transaction. Yeah, hopefully, there's more to come. Great. Thanks, Dan. Thank you. The next question comes from Michael Donovan from Compass Point. Please go ahead. Thanks for taking my question and congrats on the progress. I was hoping you could talk more to your cloud software stack and the stickiness of your customers. Yeah, I'm happy to take that one. Yeah, to date, the vast majority of our customers have required a bare metal offering. That is their preference.

These are all highly advanced AI or software companies like a Microsoft. They have significant experience in the space, and they want the raw compute and the performance benefits that that brings, having access to a bare metal offering and then being able to layer their own orchestration platform over the top of that. That has been by design, that we have been offering a bare metal service. It lends itself exactly to what our customers are looking for. Having said all of that, we obviously are continuing to monitor the space, continuing to look at what customers want. We are certainly able to go up the stack and layer in additional software if it is required by customers over time. Today, as I said, we have not really seen any material levels of demand for anything other than the bare metal service that we are currently offering.

I think maybe just to add to that, Kent, if you step back and think about it, you're contracting with some of the largest, most sophisticated technology companies on the planet that want access to our GPUs to run their software. It's kind of upside-down world to then turn around and say, "Oh, we'll do all the software and operating layer." Clearly, they're in the position they are because they have a competitive advantage in that space. They're just looking for the bare metal. I think as the market continues to develop over coming years, it may be the case that if you want to service smaller customers that don't have that internal capability or budget, then yes, maybe you will open up smaller segments of the market.

For a business like ours that is pursuing scale and monetizing a platform that we've spent the last seven years building, it's very hard to see how you get scale by focusing on software, which, I think everyone generally accepts, is going to be commoditized anyway in coming years, as compared to just selling through the bare metal and letting these guys do their thing on it. That makes sense. I appreciate that. Now, you mentioned design work that is complete for a direct fiber loop between Sweetwater One and Two. How should we think about those two sites communicating with each other once they're live? Yeah, I think really the best way to think about it is it just adds an additional layer of optionality as to the customers that would be interested in that and how we contract those projects.

There are a number of customers out there who are looking particularly for scale in terms of their deployments. Obviously, being able to offer 2 gigawatts that can operate as an individual campus, even though the physical sites are separated, is something that we think has value. That is why we have pursued that direct fiber connection. Appreciate that. Thank you, guys. Thank you. At this time, we are showing no further questions. I will hand the conference back to Dan Roberts for any closing remarks. Great. Thanks, operator. Thanks, everyone, for dialing in. Obviously, it has been an exciting couple of months, and particularly last week, our focus now turns to execution to deliver 140,000 GPUs through the end of 2026.

Also continuing the ongoing dialogue with a number of different customers around monetizing the substantial power and land capacity we've got available and our ability to execute and deliver compute from that. I appreciate everyone's support. I look forward to the next quarter.

