IREN Ltd, a prominent player in the Bitcoin mining and AI cloud sectors, reported its Q4 2025 earnings with record annual revenue and a significant market reaction. The company exceeded revenue forecasts, leading to a notable after-hours stock price increase of 3%. IREN’s financial performance was marked by a strong net income and an impressive growth in operational capacities. According to InvestingPro data, the company boasts an exceptional gross profit margin of 91.66% and has delivered a remarkable 210.51% return over the past year.
Key Takeaways
- Record annual revenue of $187 million, surpassing forecasts.
- Stock price increased by 3% in after-hours trading.
- Significant expansion in Bitcoin mining and AI cloud capacities.
- Strong net income of $177 million and a cash reserve of $565 million.
- Targeting $200-250 million annualized AI cloud revenue by December.
Company Performance
IREN Ltd demonstrated robust performance in Q4 2025, achieving record annual revenue of $187 million. The company’s strategic focus on expanding its Bitcoin mining and AI cloud operations contributed to a tenfold growth in EBITDA year-on-year. This performance is set against a backdrop of growing demand for high-performance compute solutions, positioning IREN favorably within the industry.
Financial Highlights
- Revenue: $187 million, exceeding the $183 million forecast.
- Earnings per share: Data not provided; analysis focused on revenue performance.
- Net income: $177 million.
- Cash reserves: $565 million.
- Total assets: $2.9 billion.
Earnings vs. Forecast
IREN Ltd’s actual revenue of $187 million surpassed the forecast of $183 million, resulting in a 2.34% revenue surprise. This positive deviation from expectations reflects the company’s effective execution of its strategic initiatives, particularly in expanding its Bitcoin mining and AI cloud capacities.
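For reference, a revenue surprise is simply the percentage by which reported revenue beats (or misses) the consensus forecast. A minimal sketch using the rounded figures above; the quoted 2.34% reflects unrounded actual and forecast values:

```python
def revenue_surprise_pct(actual: float, forecast: float) -> float:
    """Percentage by which actual revenue beats (positive) or misses (negative) the forecast."""
    return (actual - forecast) / forecast * 100

# Rounded figures from the release; unrounded inputs account for the quoted 2.34%.
print(f"{revenue_surprise_pct(187e6, 183e6):.2f}%")  # ~2.19% with rounded inputs
```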
Market Reaction
Following the earnings announcement, IREN’s stock price rose by 3% in after-hours trading, reaching $22.81. This increase reflects investor confidence in the company’s growth trajectory and its ability to exceed revenue expectations. The stock’s performance is notable given its proximity to the 52-week high of $24.29. With a market capitalization of $4.76 billion and analyst price targets ranging from $16 to $27, InvestingPro analysis suggests the stock is currently trading above its Fair Value. Investors should note the stock’s high volatility, with a beta of 3.99.
Outlook & Guidance
Looking forward, IREN aims to achieve $200-250 million in annualized AI cloud revenue by December. The company plans to deploy over 60,000 NVIDIA GB300 GPUs and expand its data center capacity with the development of Horizon 1 and Horizon 2 facilities. These initiatives underscore IREN’s commitment to scaling its operations and enhancing its competitive position in the market. InvestingPro has identified 18 additional key investment factors for IREN, available exclusively to subscribers, along with a comprehensive Pro Research Report that provides deep-dive analysis of the company’s growth trajectory and market position.
Executive Commentary
Daniel Roberts, Co-CEO, emphasized the company’s strategic foresight, stating, "Our ability to preempt those digital demands and build for tomorrow, position for tomorrow rather than where we are today is a key competitive advantage." Roberts also highlighted IREN’s successful Bitcoin mining operations, noting, "We never missed a milestone on Bitcoin mining. We’re the most profitable, if not the only profitable, Bitcoin miner because we did things properly from the start."
Risks and Challenges
- Market volatility in Bitcoin prices could impact mining revenue.
- Potential supply chain disruptions affecting GPU deployment.
- Regulatory changes in cryptocurrency and AI sectors.
- Increasing competition in the AI cloud market.
- Infrastructure constraints in expanding data center capacities.
Q&A
During the earnings call, analysts inquired about IREN’s flexible GPU deployment strategies and power efficiency across its data centers. Executives clarified the company’s approach to colocation and cloud services, as well as its financing strategies for GPU and infrastructure expansion. These discussions highlighted IREN’s focus on operational efficiency and strategic growth.
Full transcript - IREN Ltd (IREN) Q4 2025:
Conference Operator: Good day, and thank you for standing by. Welcome to the IREN FY twenty twenty five Results Conference Call. At this time, all participants are in a listen only mode. Please be advised that today’s conference is being recorded. After the speakers’ presentation, there will be a question and answer session.
I would now like to hand the conference over to your speaker today, Mike Power, VP, Investor Relations.
Mike Power, VP, Investor Relations, Iron: Thank you, operator. Good afternoon, and welcome to Iron’s FY twenty twenty five results presentation. My name is Mike Power, VP of Investor Relations. And with me on the call today are Daniel Roberts, Co-Founder and Co-CEO; Belinda Nussafora, CFO; Anthony Lewis, Chief Capital Officer; and Kent Draper, Chief Commercial Officer. Before we begin, please note that this call is being webcast live with a presentation.
For those that have dialed in via phone, you can elect to ask a question via the moderator after our presentation. Before we begin, I would like to remind you that certain statements that we make during the conference call may constitute forward looking statements and Iron cautions listeners that forward looking information and statements are based on certain assumptions and risk factors that could cause actual results to differ materially from the expectations of the company. Listeners should not place undue reliance on forward looking information or statements. Please refer to the disclaimer on Slide two of the accompanying presentation for more information. Thank you.
And I will now turn the call over to Dan Roberts.
Daniel Roberts, Co-Founder and Co-CEO, Iron: Thanks, Mike. Good afternoon, everyone, and thank you for joining our FY twenty twenty five earnings call. So today, we will provide an update on our financial results for the fiscal year ended June 30, along with some operational highlights and strategic updates across our business verticals. We’ll then end the call with Q and A. So FY ’twenty five was a breakout year for us, both operationally and financially.
We delivered record results across the board, including 10 times EBITDA growth year on year and strong net income, which Belinda will discuss shortly. Operationally, we scaled at an unprecedented pace. We increased our contracted grid connected power by over a third to nearly three gigawatts and more than tripled our operating data center capacity to 810 megawatts, all at a time when power, land and data center shortages continue to persist across the industry. We expanded our Bitcoin mining capacity 400% to 50 exahash and, in the process, cemented our position as the most profitable large scale public Bitcoin miner. At the same time, we made huge strides in AI, scaling GPU deployments to support a growing roster of customers across both training and inference workloads. We also commenced construction of Horizon 1, our first direct-to-chip liquid cooled AI data center, and Sweetwater, our two gigawatt data center hub in West Texas, one of the largest data center developments in the world and a cornerstone of our future growth plans.
These achievements underscore the strength of our execution and the earnings potential of our expanding data center and compute platform. We expect this momentum to carry into FY 2026 and beyond as we realize the revenue potential of our 50 exahash platform and advance our core AI growth initiatives. So reflecting on current operations, our AI cloud business is scaling rapidly, with more than 10,000 GPUs online or being commissioned in the coming months. Backed by multiple tranches of non dilutive, single digit rate GPU financing, this rollout will feature next generation liquid cooled NVIDIA GB300 NVL72 systems at our Prince George campus. This strengthens our position as a leading AI cloud provider and a newly designated NVIDIA preferred partner.
In parallel, while we’ve paused meaningful mining expansion, our 50 exahash platform continues to generate meaningful cash flow, over $1,000,000,000 a year in annualized revenue at current economics, supporting our continued growth in AI. Together, these operations are approaching annualized revenue of $1,250,000,000. That’s the scale we’re delivering today. However, the clear visibility to continued growth ahead is something we’re quite excited about. On that note, our strategy is focused on scaling across the full AI infrastructure stack, from grid connected transmission line all the way down to the digital world compute. With a strong track record building power dense data centers and operating GPU workloads, we are uniquely positioned to serve AI customers end to end, from cloud services to turnkey colocation, capturing a broad and growing addressable market.
Beyond the current expansion to 10,000 GPUs, 160 megawatts of operating capacity in BC provides a path to deploy more than 60,000 NVIDIA GB300s, with Horizon 1 then offering the potential to scale that further to nearly 80,000. As we continue to assess market demand, this gives line of sight to billions in annualized revenue from our AI cloud business alone. In terms of AI data centers, we’re progressing three major data center projects to drive this revenue growth, as well as provide scope for future expansion. At Prince George in BC, we’re continuing to transition existing capacity from Bitcoin to AI workloads, with retrofits for air cooled GPUs and the construction of a newly announced liquid cooled data center underway to support our GB300 deployment. At Childress, Horizon 1 remains on schedule for Q4 twenty twenty five.
Given strong demand signals, we’ve also begun site works and long lead procurement for Horizon 2, a second liquid cooled facility. Together, these projects can support over 38,000 NVIDIA GB300s. At Sweetwater, our flagship two gigawatt data center hub in West Texas, Sweetwater 1 remains on track for energization in April 2026. Construction is progressing well, with key long lead equipment either on-site already or on order. Upgrades to the utility substation have now commenced.
So in summary, we’ve delivered record performance this year. We’ve got a clear AI growth path with near term milestones. And most excitingly, we continue to position our platform ahead of the curve to monetize substantial opportunities in the AI infrastructure and compute markets. I’ll now hand over to Belinda who will walk through the FY twenty five results in more detail.
Belinda Nussafora, CFO, Iron: Thank you, Dan. Good morning to those in Sydney, and good afternoon to those in North America. As noted in our recent disclosures, we’ve completed our transition to US domestic issuer status from July 1. And as such, we’ve reported our full year results for the period ended thirty June twenty twenty five under US GAAP and the required SEC regulations. For Q4 FY ’25, we delivered record revenue of $187,000,000, an increase of $42,000,000 from the previous quarter, primarily due to record Bitcoin mining revenue of $180,000,000 as we operate at 50 exahash.
During the quarter, we also delivered AI cloud revenue of $7,000,000. Our Bitcoin mining business continues to perform strongly, supported by best in class fleet efficiency of 15 joules per terahash and low net power costs of 3.5¢ per kilowatt hour in Q4. Whilst our operating expenses increased to $114,000,000, primarily due to overheads and depreciation costs associated with our expanded data center platform and increased Bitcoin mining and GPU hardware, we’ve delivered a strong bottom line of $177,000,000. High margin revenues from our Bitcoin mining operations were a key driver of this profitability, with an all in cash cost of $36k per bitcoin mined versus an average realized price of $99k. Note that these all in costs incorporate expenses across our entire business, including the AI verticals, underscoring the strength of our platform. We closed the financial year with approximately $565,000,000 of cash and $2,900,000,000 in total assets, giving us a strong balance sheet to support the next stage of growth.
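To make those unit economics concrete, here is a back-of-the-envelope sketch. The network hash rate, block cadence and subsidy are our assumptions, not figures from the call, so treat the outputs only as a rough plausibility check against the quoted $36k all-in cash cost (which also includes overheads beyond electricity):

```python
# Back-of-the-envelope Bitcoin mining cost check.
# From the call: ~50 EH/s fleet, 15 J/TH fleet efficiency, $0.035/kWh net power cost.
# Assumptions (not from the call): ~950 EH/s network hash rate, 144 blocks/day, 3.125 BTC subsidy.

FLEET_EHS = 50
EFFICIENCY_J_PER_TH = 15          # 1 J/TH equals 1 W per TH/s
POWER_COST_PER_KWH = 0.035
NETWORK_EHS = 950                 # assumption
BLOCKS_PER_DAY, SUBSIDY_BTC = 144, 3.125

fleet_ths = FLEET_EHS * 1e6                               # 1 EH/s = 1e6 TH/s
power_draw_mw = fleet_ths * EFFICIENCY_J_PER_TH / 1e6     # watts -> megawatts
electricity_per_day = power_draw_mw * 1_000 * 24 * POWER_COST_PER_KWH
btc_per_day = FLEET_EHS / NETWORK_EHS * BLOCKS_PER_DAY * SUBSIDY_BTC

print(f"Fleet draw:          ~{power_draw_mw:.0f} MW")                     # ~750 MW
print(f"Electricity cost:    ~${electricity_per_day:,.0f} per day")        # ~$630k/day
print(f"Electricity per BTC: ~${electricity_per_day / btc_per_day:,.0f}")  # ~$27k, below the $36k all-in figure
```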
I’ll now hand back to Dan to discuss the exciting growth opportunities that continue for Iron.
Daniel Roberts, Co-Founder and Co-CEO, Iron: Thanks, Belinda. So I think it’s fair to say that the market backdrop for our AI cloud business is pretty compelling. Industry reports demonstrate accelerating enterprise adoption of AI solutions and services, with the percentage of organizations leveraging AI in more than one business function growing from 55% to 78% in the last twelve months alone. As almost all of us would know, demand is accelerating faster than supply. New model development, sovereign AI programs and enterprise adoption are driving a step up in GPU needs, and the constraint is infrastructure and compute, not customer interest.
Power availability and GPU ready, high density data center capacity remain scarce, with customers prioritizing speed to deploy and the ability to scale. Iron is uniquely positioned to meet this demand. Our vertical integration gives us control over the key bottlenecks: significant near term grid connected power, with data centers engineered for next generation power dense compute. This enables accelerated delivery timelines and rapid, low risk scaling. Because we own and operate the full end to end stack, we are able to deliver superior customer service and tighter control over efficiency, uptime, and service quality, translating directly into a better customer experience.
We are leading with a bare metal service because it gives sophisticated developers, cloud providers, and hyperscalers what they want most: direct access to compute and the flexibility to bring their own orchestration. As and when customer needs evolve, we have the flexibility to layer in software solutions to provide additional options to the customer. Our new status as an NVIDIA preferred partner is helpful in that regard. It enhances supply access and helps broaden our customer pipeline, supporting expansion across both existing relationships and new end users, platforms and demand partners. So the market is large, it’s accelerating, supply is constrained, and we have the platform to meet market demand for AI cloud, and meet it reasonably quickly.
That is why we’re immediately scaling to more than 10,000 GPUs, but also now importantly focusing on what comes next. So our 10,000 GPU expansion is underway. With it, we will be positioned at the front of the Blackwell demand curve, delivering first to market benefits. We saw this with our initial B200 deployment several weeks ago. Upon commissioning, it was immediately contracted on a multi year basis.
Importantly, we are funding growth in a CapEx efficient way. In the past week alone, we have secured two new tranches of financing, which have funded 100% of the purchase price of new GPUs at single digit rates. Anthony will touch on this shortly, as well as what’s next. In terms of revenue, these GPUs will be delivered and progressively commissioned over the coming months, targeting $200,000,000 to $250,000,000 of annualized revenue by December this year. Approximately one exahash of ASICs will be displaced as a result, which we plan to reallocate to sites with available capacity, minimizing any impact to the overall 50 exahash installed hash rate.
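For context on what that target implies, a rough back-of-the-envelope sketch follows. The assumption that all ~10,900 GPUs earn at full utilization year-round is ours, not company guidance, so actual contracted rates would sit above these blended floors:

```python
# Rough implied blended rate from the AI cloud guidance.
# From the call: $200M-$250M annualized revenue target, fleet scaling to ~10,900 GPUs.
# Assumption (not from the call): the full fleet earning at 100% utilization year-round.

HOURS_PER_YEAR = 8_760
GPU_COUNT = 10_900

for annualized_revenue in (200e6, 250e6):
    rate = annualized_revenue / GPU_COUNT / HOURS_PER_YEAR
    print(f"${annualized_revenue / 1e6:.0f}M/yr -> ~${rate:.2f} per GPU-hour (blended)")
```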
Finally, we also expect the strong margin profile of our AI cloud business to continue, underpinned by low power costs, but importantly, full ownership of our AI data centers, eliminating any third party colocation fees from our cost base. Our Prince George campus will anchor this next phase of our AI cloud growth. So as I alluded to earlier, we’re pleased to announce today that construction is well underway on a new 10 megawatt liquid cooled data center at Prince George, designed to support more than 4,500 Blackwell GB300 GPUs. Following this build out, half of Prince George’s capacity will now be dedicated to AI cloud services. There is then clear runway to double capacity to more than 20,000 GPUs at this site alone.
Procurement is also in progress to equip every GPU deployment at Prince George with backup generators and UPS systems. Beyond Prince George, our data center campuses at Mackenzie and Canal Flats create an even larger opportunity. With existing powered shells designed to the same architecture as Prince George, these sites offer a straightforward and replicable pathway to more than 60,000 GB300s. Horizon 1 and our broader portfolio of data center sites in Texas open up a further path to continued AI cloud growth. It’s fair to say we’re incredibly excited by the AI cloud opportunity.
It’s a business line that many are simply unable to pursue due to the significant technical expertise and requirements involved. With two to three year payback periods and the low cost GPU financing structures we are securing, we see this as a highly attractive pathway to continue compounding shareholder value. Our ability to build and operate world class AI services all the way from the transmission line down to the compute layer uniquely positions us at the forefront of this digital AI transformation. Now onto the major projects driving our AI expansion. Childress continues to show strong on the ground momentum, with Horizon 1 construction progressing according to schedule and remaining on track for this year.
As you can see in the progress photos, the data center buildings are nearing completion and the installation of the liquid cooling plant on the south side of the halls is underway. Based on customer feedback, we’ve also upgraded certain specifications, including introducing full Tier three equivalent redundancy across all critical power and cooling systems. Due to the expected timing gap before NVIDIA’s Rubin GPUs are available, we have also reconfigured the design to be able to accommodate a wider range of rack densities, while preserving the flexibility to accommodate next generation systems when they are available. Even with these adjustments, we expect to maintain a very competitive build cost target, reflecting the efficiencies of our in house design, procurement and construction model. Finally, we’re also moving ahead with certain tenant scope work to derisk delivery timelines and provide additional flexibility, including the potential to monetize the capacity via our own AI cloud service.
In that regard, engagement remains active with both hyperscaler and non hyperscaler customers across both cloud and colocation opportunities. Site visits, diligence, commercial discussions, and documentation are ongoing. Building on this strong customer traction at Horizon 1 and general overall market momentum, we are pleased to announce that we’ve commenced early works and long lead procurement for Horizon 2, a potential second 50 megawatt IT load liquid cooled facility at Childress. Together, Horizons 1 and 2 will have capacity to support over 38,000 liquid cooled GB300s, creating one of the largest clusters in the U.S. market. In saying that, it’s still modest compared to the capacity of our Sweetwater hub, which could support over 600,000 GB300s, which is a good segue to Sweetwater. So both construction and commercial momentum continue to build at the 1.4 gigawatt Sweetwater 1 site, still scheduled for energization in April 2026. As you can see in the progress photo, construction of the high voltage bulk substation is underway and key long lead equipment continues to arrive at site. On the commercial front, we’re advancing discussions with prospective customers for different structures.
The campus is inherently flexible by design, so we can meet demand across the entire AI infrastructure stack: powered shells for partners who want to self operate, turnkey colocation for customers seeking speed, and cloud services for those who would like us to run it end to end. While we have a multitude of other exciting growth opportunities preceding this, Sweetwater’s combination of scale, certainty and flexibility positions it as yet another growth engine for Iron in the accelerating wave of AI compute. So where do we sit today? Industry estimates call for more than 125 gigawatts of new AI data center capacity over the next five years, with hyperscale CapEx forecasts supporting the credibility of that trajectory.
Yet, as most of us know, existing grid capacity is well documented as being far from sufficient to meet this demand. Against that backdrop, we have expanded our secured power capacity more than 100x since IPO and built over 810 megawatts of operational next generation data centers, in the process demonstrating our ability to not only secure valuable powered land, but also deliver next generation data centers and compute at scale in some of the most demanding markets. It’s a really exciting time for the industry, and it’s a really exciting time for us.
With that hopefully providing a reasonably comprehensive overview of the opportunity in front of us, I’ll hand over to our newly appointed Chief Capital Officer, Anthony Lewis, to discuss financing.
Anthony Lewis, Chief Capital Officer, Iron: Thanks, Dan, and good morning or good evening, everyone, as the case may be. This slide highlights how we are funding growth across our AI verticals through a combination of strategic financings and strong cash flows from existing operations. The table to the right, which many of you are familiar with, shows the illustrative cash flows from our existing Bitcoin mining operations. At the current network hash rate and a $115,000 Bitcoin price, we show over $1,000,000,000 in mining revenue. And after subtracting all costs and overheads of our entire business, we arrive at close to $650,000,000 of adjusted EBITDA.
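As a rough reconstruction of that illustration, a short sketch follows. The network hash rate is an assumption on our part, transaction fees are ignored, and the result is very sensitive to both:

```python
# Rough reconstruction of the ">$1B annualized mining revenue" illustration.
# From the call: 50 EH/s fleet, $115,000 Bitcoin price.
# Assumptions (not from the call): ~950 EH/s network hash rate, 52,560 blocks/year,
# 3.125 BTC block subsidy; transaction fees excluded.

FLEET_EHS, NETWORK_EHS = 50, 950
BLOCKS_PER_YEAR, SUBSIDY_BTC, BTC_PRICE = 52_560, 3.125, 115_000

btc_per_year = FLEET_EHS / NETWORK_EHS * BLOCKS_PER_YEAR * SUBSIDY_BTC
revenue = btc_per_year * BTC_PRICE
print(f"~{btc_per_year:,.0f} BTC/yr -> ~${revenue / 1e9:.2f}B annualized mining revenue")
```

With transaction fees included or a somewhat lower network hash rate, this lands at or above the $1 billion figure quoted on the call.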
There is then a further $200,000,000 to $250,000,000 of annualized revenue on top of this expected to come from the AI cloud business expansion, with an increasing contribution from that business over time. There is clearly some sensitivity to the relevant assumptions here, but the key message is we expect significant operating cash flow to invest in our growth initiatives over a range of operating conditions, with our position enhanced by our low cost power and best in class hardware performance. These cash flows, together with existing cash and recent financing initiatives, which I’ll touch on shortly, fully fund our near term CapEx, including the cloud expansion discussed, with liquid cooling and power redundancy at Prince George taking GPUs to 10,900, completing Horizon 1 and energizing Sweetwater 1 substations. Let me now turn to our funding strategy more generally. As a capital intensive business growing quickly, we are clearly focused on diversifying our sources of capital so that we maintain a resilient and efficient balance sheet.
The $200,000,000 of GPU financings we announced this week are a recent example of that. These transactions had 100% of the upfront GPU CapEx financed, allowing us to accelerate the growth flywheel for our AI cloud business at an attractive cost of capital. And they pay down over 2 to 3 years, matching well against the accelerated paybacks on the underlying hardware. End of lease term options in structures like these also give us added operational flexibility. We’re also seeing strong institutional demand for asset backed and infrastructure lending in the AI sector.
And with our existing portfolio of assets and the growth opportunities in front of us, we think Iron is well placed to access that capital. We are currently advancing a range of financing work streams which could support further growth. This could include further asset backed financing, project level and corporate level debt. We’ve also proven good access to the convertible bond market with two well supported transactions over the course of the financial year, and that remains a further source of funding potential for us. Of course, we’ll also be focused on maintaining a prudent level of equity capital as we continue to scale, ensuring continued balance sheet resilience.
So in closing, with a foundation of strong operating cash flow from existing operations and a broad range of capital sources available to us, we feel we are well placed to fund the next stage of growth. With that, we’ll now turn the call over to Q and A.
Conference Operator: Thank you. Our first question comes from Paul Golden with Macquarie. You may proceed.
Paul Golden, Analyst, Macquarie: Thanks so much. I wanted to ask on efficiency at these sites. I noticed that PUE in the British Columbia sites is down at 1.1, which is a very impressive efficiency ratio, versus Sweetwater being about 1.4. Those may be peak numbers as opposed to average. But I was wondering if you could give some color around how that might influence the thought process around rollout or concentration of sites receiving GPUs initially versus others as you think about the efficiency?
And then also, just along the lines of this infrastructure being developed, with PUE that low being cited, how are you thinking about the backup generation for the existing pods that you have outstanding? I only ask that question in relation to the on demand versus contracted customer dynamic and how you’re seeing that evolve. Thank you so much.
Kent Draper, Chief Commercial Officer, Iron: Hi, Paul. Happy to jump in and take that one. So as you mentioned, across the BC sites, we’re operating at a PUE of 1.1. That’s on an air cooled basis. Once we install the liquid cooled facilities there, we expect that to be operating on an average slightly higher than that, but still well under 1.2 PUE across the year.
At Childress, the Horizon 1 liquid cooled installation, the number that you mentioned is much closer to a peak PUE number, although we actually expect it to be less than 1.4, and the average PUE over the year to be around 1.2. In all cases, I think those are extremely competitive numbers across the industry. We are more led in terms of our deployments across the different sites by what our customers are ultimately demanding. Within British Columbia, the ability to scale extremely quickly on an air cooled basis has been a significant driver of demand for us. And again, that PUE level is extremely competitive regardless. And so that is where we are seeing some of the primary interest from our customer base.
At Horizon 1, that liquid cooled capacity in particular is extremely scarce in the industry at the moment. And the ability to locate a single cluster of just over 19,000 GB300s is significantly attractive and driving high levels of customer interest. So I think, yeah, it’s less driven by PUE overall in terms of deployments and more driven by the customer side of the equation. To your question on redundancy, as Dan mentioned in his remarks, we’re introducing redundancy across the entire fleet of GPUs that we have in our existing operating business as well as for the new GPUs that we purchased. While we believe that, for many of the applications these GPUs are used for, redundancy is not necessarily required, we have seen some of our customers wanting that redundancy. And, you know, for us, we ultimately want to be driven by providing the best customer service, and that’s really what’s driving us to install that redundancy across the fleet.
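For readers less familiar with the metric, PUE (power usage effectiveness) is total facility power divided by IT power, so the quoted ratios translate directly into overhead on top of the IT load. A quick sketch at an illustrative 50 MW IT load (the load figure is ours, chosen only for illustration):

```python
def total_facility_mw(it_load_mw: float, pue: float) -> float:
    """PUE = total facility power / IT power, so total power = IT load * PUE."""
    return it_load_mw * pue

# Illustrative 50 MW IT load at the PUE levels discussed above.
for pue in (1.1, 1.2, 1.4):
    total = total_facility_mw(50, pue)
    print(f"PUE {pue}: {total:.0f} MW total, {total - 50:.0f} MW of cooling and other overhead")
```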
Paul Golden, Analyst, Macquarie: Thanks, Kent. And if I could ask one quick follow-up on the GB300 NVL72 capability that has been incorporated or retrofitted into the original plan for Horizon 1, I believe. If you could just give us any incremental color around what that may have entailed and any impact that may have had on financing availability or future financing plans as you think about incremental costs for that density, and in particular maybe as you plan for Rubin given this preferred partner status now? Thank you.
Kent Draper, Chief Commercial Officer, Iron: Yes. I think what you’re referring to is Dan’s comments around introducing flexibility for a wider range of densities. And for us, that actually comes more towards lower densities, so being able to operate at densities that are under what the Vera Rubins would require. The base design as we had it could handle up to 200 kilowatts a rack, easily able to accommodate the next iteration of GPUs.
But what we’re seeing in the market today is that many customers actually want flexibility to be able to operate not only at the rack densities for GB300s, which are around 135 kilowatts a rack, but actually at even lower densities to accommodate additional types of compute within the data center infrastructure. And so what we’ve done is gone back and reworked some of the electrical and mechanical equipment to be able to actually accommodate lower rack densities. So as it relates to accommodating Rubins in the future, no change from our perspective.
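As a rough illustration of what those densities mean for a hall, here is a small sketch; the 50 MW IT load is illustrative and the calculation ignores networking, storage and other non-rack loads:

```python
def racks_supported(it_load_mw: float, rack_kw: float) -> int:
    """Racks a given IT load can power at a uniform rack density (ignores non-rack loads)."""
    return int(it_load_mw * 1_000 // rack_kw)

# 135 kW ~ GB300 NVL72 rack density discussed above; 200 kW is the original design ceiling.
for rack_kw in (100, 135, 200):
    print(f"{rack_kw} kW racks on a 50 MW IT load: ~{racks_supported(50, rack_kw)} racks")
```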
Paul Golden, Analyst, Macquarie: Great. Thanks so much and congratulations.
Conference Operator: Thank you. Our next question comes from John Todaro with Needham. You may proceed.
John Todaro, Analyst, Needham: Hey, guys. Thanks for taking my question and congrats on a very strong quarter. First question on the cloud business. And apologies if I missed this, but just on the average duration of the contracts, kind of trying to determine, you know, given the three year payback with the GPUs plus infrastructure, the overlap there with the customer contract duration. And then I also have a follow-up on the HPC side of things.
Kent Draper, Chief Commercial Officer, Iron: Yeah. We’ve got a range of contract lengths across our existing asset base today, all the way from one month rolling contracts out to three year contracts. For the newer gen equipment, including the Blackwell purchases that we’ve made, we’re typically seeing demand in slightly longer contract lengths whilst those Blackwells are, you know, new equipment on the market. And so a good indication of that is the initial portion of our B200s that, as Dan mentioned, as soon as they were installed, we were able to contract them on a multiyear basis. So we do have contracts across the spectrum, but we are seeing for newer gen equipment, you know, often longer term contracts being available.
John Todaro, Analyst, Needham: Got it. That’s great. And then just with the success you’re having so far in the cloud business, you could take a step back and think, you know, do we need to sign HPC colo capacity? Would you be more comfortable kind of continuing with this at even a bigger scale? And then as it relates to just kind of thoughts on the CapEx to get you there, any targeted leverage ratio or threshold on debt too?
Kent Draper, Chief Commercial Officer, Iron: Yes. We’re constantly evaluating the opportunities as it relates to both colocation and cloud. I think we’re uniquely positioned in the sense that we are able to take advantage of both opportunities, which we think is quite differentiated to a number of others in the industry. They obviously have very different profiles in terms of the risk adjusted returns. Colocation: longer dated contracts, typically in the range of five to twenty years, but longer payback periods, often more than seven years before you can get your capital back.
And in many cases, because of the nature of the debt financing associated with those, there’s, you know, very little actual cash flow coming out of the business during that finance period. Whereas cloud: shorter dated contracts, but much stronger margins and a shorter overall payback period. So we typically see around two year payback periods on the GPUs alone and three to four years on the GPUs plus data center infrastructure. So it is something that we’re constantly evaluating, and overall we’re looking to maximize risk adjusted returns across both models. I think you can tell from the comments today that, as it stands, we do find the cloud opportunity extremely compelling.
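To illustrate the payback comparison, a minimal sketch with purely hypothetical per-GPU figures chosen only to land in the ranges Kent describes; none of these inputs are disclosed numbers:

```python
def payback_years(capex: float, annual_net_cash_flow: float) -> float:
    """Simple (undiscounted) payback period in years."""
    return capex / annual_net_cash_flow

# Hypothetical per-GPU inputs, chosen only to illustrate the quoted ranges.
gpu_capex = 45_000            # GPU hardware, $
dc_capex = 25_000             # attributable data center infrastructure, $
annual_cloud_margin = 22_000  # annual net cash flow per GPU from cloud contracts, $

print(f"GPUs alone: {payback_years(gpu_capex, annual_cloud_margin):.1f} years")            # ~2 years
print(f"GPUs + DC:  {payback_years(gpu_capex + dc_capex, annual_cloud_margin):.1f} years") # ~3.2 years
```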
Anthony, did you want to touch on the comments around financing?
Anthony Lewis, Chief Capital Officer, Iron: Sure. Thanks, Kent. Yeah, I think obviously we have very modest debt servicing requirements today. And I guess as we scale the business, obviously, where those opportunities develop and the nature of the cash flows and the security of those cash flows will ultimately drive what an appropriate level of leverage is for the business.
So the capital structure will continue to evolve as we continue to grow, but we’ll obviously be focused on maintaining a strong and resilient balance sheet as well as an efficient cost of capital.
John Todaro, Analyst, Needham: Understood. Thank you, gentlemen.
Conference Operator: Thank you. Our next question comes from Darren Aftali with Roth. You may proceed.
Darren Aftali, Analyst, Roth: Congrats on all the progress. A couple, if I may. So on Horizon 1 and 2, I guess there’s commentary in the press release about what theoretically Horizon 1 could support in terms of GPUs, but you kind of left the door open that there may be other uses. So I’m kind of curious on the strategic thinking there. And then on Horizon 2, if I think my math is right, you guys only have 25 megawatts left at Childress and you’re talking about, I guess, 50 megawatts of critical load.
Will you be borrowing from your Bitcoin business to kind of get there? And are there expansion opportunities beyond that? Second question, I guess on Slide nine, one of your demand partners is FluidStack. I’m more curious on the Neo Cloud side and maybe entity in particular given one of your peers signed a deal with them and another partner there, just kind of what the demand drivers are with FluidStack in particular? Thanks.
Daniel Roberts, Co-Founder and Co-CEO, Iron: Thanks, Darren. Appreciate that. So, three questions there. Horizon 1, we mentioned 19,000; it’s just a tick over that based on the NVL72 configuration of GB300s. The project has been engineered specifically for liquid cooled GPUs, so there is no other use case as an end market other than that. In saying that, there’s a couple of different ways we might monetize that capacity. One is through different types of GPUs. So as we mentioned during the presentation and Kent reiterated, we’ve now introduced the flexibility to accommodate a wider range of rack densities.
We actually discovered building this that the issue is we’re building rack densities that are too dense for where the industry is today. So we’ve had to dial it back a little bit. So accommodating lower rack density gives us the ability to accommodate a wider range of different GPUs whilst preserving the ability to service the Vera Rubins as and when they’re released and potentially beyond that. So that’s exciting. In terms of monetizing the capacity, there’s then colocation versus cloud.
So we may buy, own, operate the 19,000 GPUs, and we’re having conversations with a variety of potential partners for that, including hyperscale customers. We’re progressing financing work streams in parallel. That’s a real option. If the risk return balance is right, as Kent mentioned, then absolutely we’re in a unique position where not many people can build, own, and operate a cloud service. So we’re pursuing that, and we’re excited about that.
But equally, we’re seeing a lot of demand for colocation, and that would deliver more of an infrastructure return on capital and will remain open to that structure. But we want to see a risk return framework that is compelling. And to date, I guess we haven’t yet seen that. In terms of potentially displacing additional mining capacity, you referenced 25 megawatts potentially being displaced from Childress. Look, that’s a cost of business.
As we said seven years ago when we started this business, Bitcoin mining will help bootstrap the business and help us build out the data center capacity; as and when higher and better value use cases come along, we then have the ability and the flexibility to swap those in and monetize our data center capacity differently. In saying that, we don’t envisage stopping building new data centers. We’ve got two gigawatts of Sweetwater. So it may simply be the case where we just reallocate capacity across different sites, and perhaps there’s some relocation of mining capacity to Sweetwater at some point, but that’s something we’re working through. Finally, FluidStack.
Yes, we’ve known the FluidStack guys for quite some time. We’ve got a good relationship there. We speak to them. We speak to Google. We know what deals are being done.
We look at the deals. As of today, we find a three year payback on data center and GPU infrastructure pretty compelling, particularly when Anthony’s lining up 100% GPU financing at single digit interest rates. So look, we’ll remain open to colocation opportunities, but the devil is in the detail, and the high level is not always what you end up carrying over a longer period of time. So I might leave that there.
Darren Aftali, Analyst, Roth: Great. Thanks for being safe. Best of luck.
Conference Operator: Thank you. Our next question comes from Joseph Baffi with Canaccord Genuity. You may proceed.
Joseph Baffi, Analyst, Canaccord Genuity: Hey, everyone. Good morning, and congrats on all the progress here in fiscal Q4 and quarter to date. Really great progress. Just really one question for me, maybe just a two part, but a single question. Just wanted to drill down a little bit more on the financing on the Blackwells.
I know that you mentioned there’s some optionality at the end of the lease financing period. I thought maybe we could kind of go into what you’re thinking at the end of those lease financing periods, and what may be a factor in having you decide what to do next with those? And then just as a follow-up, you know, it does seem like, at least initially, building your own clusters with this financing does look attractive on a payback and time value of money basis. Just wondering how much financing do you think is available in this market versus the kind of project financing that maybe yourself and others have discussed for a broader colocation type project? Thanks a lot.
Anthony Lewis, Chief Capital Officer, Iron: Yeah, thanks for the question. You’re probably familiar with the various types of leasing structures you see in the market. Some of them are structured as more classic full payout finance leases. Others are sort of more tech rotation style, where you have fixed committed lease payments and then an FMV option to acquire at the end, often capped at a percentage of the day one price.
So that obviously allows you the flexibility to potentially return the equipment if we wanted to reinvest in, for example, the next generation of GPUs at that time, or obviously continue to own and operate the equipment, depending on the conditions that we see. Sorry, could you just remind me of the second part of your question?
Joseph Baffi, Analyst, Canaccord Genuity: Just, you know, the amount of financing capacity you see out there on the GPU side versus colocation.
Anthony Lewis, Chief Capital Officer, Iron: Yeah, I think they’re obviously quite different asset profiles, and the amount of leverage and the cost of that leverage depends greatly on the specific situations. On the cloud side, there’s obviously a focus on the underlying portfolio of customers, the diversity in the customer mix, the credit quality, and the duration of the contracts; that will all drive both the pricing and leverage that you can secure. And I guess similarly on the colocation side, you can obviously obtain very attractive cost of funds and very meaningful leverage against high quality offtakes, such as hyperscale offtakes.
And as you come down the credit spectrum or the duration of the contract, that will obviously flow through into the cost of the finance and the leverage that you can obtain.
Daniel Roberts, Co-Founder and Co-CEO, Iron: And maybe just to add to that, Anthony, Joe, the two are not mutually exclusive, cloud and colocation, in the sense that we are arranging these 100% financing lease structures, as Anthony mentioned, for the GPUs. But that doesn’t preclude us then financing the asset base and the infrastructure base at a data center level, similar to how you would finance a colocation. It just happens to be the case that the colocation partner is an internalized Iron entity. So that market is open. We’re talking to a vast number of potential providers of capital for that.
But as Anthony has mentioned, we’re looking up and down the entire capital stack to optimize cost of capital at a group level. So you’ve got these asset level options, but then you’ve got corporate options as well. We mentioned the buoyant convertible note market, which continues to look quite prospective. We’ve been prosecuting bond type structures at a corporate level as well. So there’s a whole array of options.
And every week, depending on the level of demand, our revenue profile, and how we’re building out different elements of the business, the jigsaw puzzle from a financing perspective kind of falls into place and helps support that. So it’s that reflexive wheel of sources and uses of capital, and that’s the benefit of now having Anthony on and dedicated full time to optimizing cost of capital while Kent runs around North America looking to deploy it.
Joseph Baffi, Analyst, Canaccord Genuity: Great. Thanks, Dan. Thanks, Anthony.
Conference Operator: Thank you. Our next question comes from Reggie Smith with JPMorgan. You may proceed.
Charlie, Analyst, JPMorgan: Hey, everyone. This is Charlie on for Reggie. Thanks for taking the question. Can you talk a bit more about some of the key hires you’ve made in building out the cloud and colocation businesses and where, if anywhere, there is still some room to go? And then as a follow-up, digging in a bit more on the sales side, can you provide a bit more on how you’re getting in front of and winning some of the AI clients that you called out in the slides?
Thanks.
Kent Draper, Chief Commercial Officer, Iron: Yeah, happy to jump in there on the resourcing question. So we’ve been hiring across the stack. As Dan made clear, you know, at the level of vertical integration that we have, we continue to need resources across all areas, including data center operations, networking, InfiniBand experts, and developers on the software side. We also continue to build out our go to market function, so that consists of hiring additional sales executives as well as solutions architects, and we’re also expanding the marketing team in parallel with that.
So there is an ongoing level of hiring across the business to support the additional customer facing work that we’re doing. And sorry, there was a last part to your question that I missed; it was breaking up a little.
Charlie, Analyst, JPMorgan: Yeah. Just more on the sales side, like how you’re getting in front of a client, what you’re competing on, why they’re choosing Iron, things like that.
Kent Draper, Chief Commercial Officer, Iron: Yeah. So we get a mix of inbound and outbound customer demand drivers. We have been active recently in the conference space, so we have been getting out, telling our story, and showing why we are differentiated. As I mentioned, we’ve been expanding the marketing team and our efforts there to help drive inbound. In particular, our activities across all social platforms have been ramping over the past twelve months, and we’re seeing a high degree of interest there.
And as that gets out into the public sphere as well as our ongoing provision of cloud services and customer word-of-mouth, we are starting to see more inbound inquiries as well around both our cloud services platform and the potential colocation platform. So it is a bit of a mix there in terms of what we’re seeing.
Daniel Roberts, Co-Founder and Co-CEO, Iron: And I think maybe just to add to that as well, Kent, like, this is exactly the point. The whole demand supply equation in this industry is imbalanced, and there is little supply. So on the demand side, when people need something, they tend to find it, particularly when it’s scarce. So word-of-mouth through these demand brokers, conferences, existing customers; word does get out. And we do have three pretty unique competitive advantages compared to other Neo Cloud competition.
Like, one, we control the infrastructure end to end. We can scale capacity up and down across our existing data center footprint, let alone the new footprint we’re building into that growth. Two, performance: vertical integration is really important because it gives us direct oversight of every single layer in the stack. So we’ve got tighter control over performance, reliability, and service, and they get higher uptime as a result because there are no colocation partners. There are no SLAs with data centers that restrain and constrain your ability to update GPUs and get your hands on them.
And then finally, from a cost perspective, we’ve got no colocation fees and greater operational efficiency as a result. So we’re in a really good spot, and this also translates to sales force and marketing support and general cloud support, because we are in the industry. We are doing stuff. We’ve got available capacity. There’s significant interest in joining Iron because we have capacity to sell.
It’s distinct from other providers who have no capacity and salespeople are sitting there with not a lot to do.
Charlie, Analyst, JPMorgan: Perfect. Thank you for taking the question, and congrats again.
Conference Operator: Thank you. Our next question comes from Brett Knobelk with Cantor Fitzgerald. You may proceed.
Brett Knobelk, Analyst, Cantor Fitzgerald: Hi, thanks for taking my question. Maybe on the cloud services front, is the strategy to go out and order or purchase GPUs with a customer already in mind? Or are you buying those GPUs, you know, and then trying to find a customer? And then could you maybe just elaborate on the power dynamics per GPU? I think the 19,000 GB300s for Horizon 1 implies about 380 of them per megawatt of critical IT load.
Do you have, like, maybe a similar metric or so for the B300s or B200s? If you could provide any color there, that’d be helpful as well.
Daniel Roberts, Co-Founder and Co-CEO, Iron: So I might take the first half, Kent, if you wanna do the second half.
Kent Draper, Chief Commercial Officer, Iron: Sure.
Daniel Roberts, Co-Founder and Co-CEO, Iron: The prospect of ordering GPUs before or after a contract, this is the nature of the industry. When companies want compute, they want it now. Like, they don’t want to wait two to three months. You think about an enterprise that’s made the decision. You think about an AI scale up or start up that’s raised a bunch of capital.
Very few companies are in a position where they can plan out and map out a two to three year timeline of GPU needs. Often, it’s we need GPUs. We need them for a project. We need them for today. So the world wants on demand compute.
And we almost use this as a universal motherhood statement to guide what we do. The world doesn’t really want data infrastructure. The world at its core wants compute, and it wants it now and when it needs it. That’s the first element. The second element is I feel like it’s groundhog day.
We’re back in this world, and it takes me back to Bitcoin mining where every man and their dog promises certain amounts of capacity online by a certain date, and no one does it. No one hits the schedules. Everyone revises them downwards, stretches them out, cost blowouts, etcetera. Because the real world’s hard. Dealing with large scale infrastructure projects, large scale workforces, complex project delivery, safety, like, takes a lot of work and systems and structures to deliver that.
This is why we’re in such a good position. We never missed a milestone on Bitcoin mining. We’re the most profitable, if not the only profitable, Bitcoin miner because we did things properly from the start. And we’re now sitting here, and as I said, it’s Groundhog Day with the cloud business, where again, all these companies, neo clouds and otherwise, promise capacity online by a certain date, and they rarely hit it. And as a result, customers get a bit gun shy. So the best thing you can do is continue ordering the hardware.
If it’s snapped up immediately as soon as it’s commissioned, that’s a pretty good sign that you’re doing the right thing. And as and when we install hardware and the sales cycle starts slowing down, then you know, okay, well, maybe we’ve just got to slow down on the orders. But each incremental order from here is a relatively small portion of our overall risk, so we can afford to take it.
Kent Draper, Chief Commercial Officer, Iron: Thanks, Dan. With respect to the power question, yes, we do continue to see the overall power usage per GPU ticking up with each incremental release from NVIDIA and the other manufacturers. I think, using some of the numbers that were presented earlier in the presentation: on an air cooled basis for B200s, we can fit over 20,000 GPUs into the Prince George site, which is 50 megawatts. At Horizon 1, with fifty megawatts of IT load, you’re looking at around 19,000 GB300s. So, yeah, it’s not exact math there, but it does give you an idea of what we’re seeing in terms of the amount of power per GPU going up over time.
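Connecting the analyst’s arithmetic to Kent’s figures, a short sketch of the implied all-in power per GPU; the counts are the approximate ones discussed on the call:

```python
def kw_per_gpu(it_load_mw: float, gpu_count: int) -> float:
    """All-in facility kW per GPU implied by a site's IT load and GPU count."""
    return it_load_mw * 1_000 / gpu_count

# Approximate figures discussed on the call.
print(f"B200, air cooled (Prince George, 50 MW): ~{kw_per_gpu(50, 20_000):.1f} kW/GPU")   # ~2.5
print(f"GB300 NVL72, liquid (Horizon 1, 50 MW):  ~{kw_per_gpu(50, 19_000):.1f} kW/GPU")   # ~2.6
print(f"GPUs per MW at Horizon 1:                ~{19_000 / 50:.0f}")                     # ~380, matching the question
```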
Mike Power, VP, Investor Relations, Iron1: Perfect. Thank you, guys. Really appreciate it.
Conference Operator: Thank you. Our next question comes from Nick Giles with B. Riley. You may proceed.
Nick Giles, Analyst, B. Riley: Yes. Hi, guys. Thanks for taking my questions. I wanted to go back to how the Horizon 1 capacity will be utilized since you’re closing in on that 4Q completion. So at what point would you make the decision to fill Horizon 1 with your own GPUs versus pursue a colocation deal?
Maybe said differently, and I think Dan alluded to this from a financing perspective, but if you were to fill it with GPUs, should we expect that to be the case for the entire capacity? Could we see you co-locate between your own GPUs and a third party? Thanks very much.
Kent Draper, Chief Commercial Officer, Iron: Yeah. I think that’s one of the advantages of where we’re at: they’re not mutually exclusive options for us. So as we mentioned earlier, we are in a unique position where we can monetize that data center capacity in a number of ways, and it doesn’t have to be ones or zeros. We don’t need to do all of it as cloud or all of it as colocation. It could be a combination within Horizon 1.
As Dan mentioned, we’ve started building out Horizon 2. Again, you know, that gives us significant optionality, where we could potentially do Horizon 1 under one methodology and one type of monetization, and Horizon 2 under another. But what we will continue to do over time is try and maximize the risk adjusted returns for how we monetize the assets. And that may fluctuate over time. We’re in an obviously incredibly dynamic industry here, and at different points in time we may see very different risk reward propositions in colocation versus cloud.
But we do have significant flexibility as to how we utilize the capacity.
Nick Giles, Analyst, B. Riley: Thanks for that, Kent. You know, just on the cloud services, you’re focused on bare metal today, but I think you did make some comments that you could expand your software offerings or integrate if needed. What should we be looking for there, or what would the incremental revenue opportunities be if you were to integrate?
Kent Draper, Chief Commercial Officer, Iron: Yeah. Today, as Dan mentioned, the vast majority of the customers that we are dealing with, which make up the majority of the compute market, are highly experienced AI players, hyperscalers, and developers. They are, for the most part, demanding bare metal because it actually suits them better to be able to bring their own orchestration layer. Where we see benefits over time from adding incrementally to the software layer is being able to serve a slightly different customer class, which might be smaller AI startups or enterprise customers who are looking for a simpler single click spin up, spin down type service. But today, yeah, where we see the demand supply imbalance, that bare metal offering that we have has a significant level of demand for it.
And so, yeah, we feel like we’re well positioned where we’re at today.
Daniel Roberts, Co-Founder and Co-CEO, Iron: And I think, again, like, just to push back on this notion that software is required and that these large, sophisticated end users of GPUs want a third party provider to staple on its own software and make them use it. Like, these guys are sophisticated. They just want compute. They wanna run their own stuff. And at the end of the day, software is eating the world.
We know that. Software is not difficult to overlay. The large customers don’t want your software. They want their own software. And we are hearing it also firsthand from executives and employees at some of these companies that offer their own software, that it’s a nightmare because every time the GPUs change, they need to update the software and rewrite it.
And it’s this constant evolution of code, bugs, rewriting, updating, etcetera, all for an area of the market that, yes, might seem good as a narrative, but is, fundamentally and substantially in terms of revenue opportunities, quite small today.
John Todaro, Analyst, Needham: Great. Got it. Thanks for all the color and keep up the good work.
Conference Operator: Thank you. Our next question comes from Steven Gagola with Jones Trading. You may proceed.
Steven Gagola, Analyst, Jones Trading: Hi. Thanks for the question. As Iron is now recognized as a preferred cloud partner on NVIDIA’s website, I was hoping, Dan and Kent, maybe you could provide more detail on your participation in the DGX Cloud Lepton marketplace. And specifically, you know, how do the economics of working through the Lepton marketplace compare to maybe operating your own independent cloud offering? You know, what advantages does Iron get from being on that platform?
And, you know, any insights into sort of NVIDIA’s fee structure or take rate for participants there? Appreciate it. Thank you.
Kent Draper, Chief Commercial Officer, Iron: Yes. Happy to give some more color there. So we’re not currently participating in the Lepton marketplace. But as an NVIDIA preferred partner, we continue to evaluate platforms like that that could expand how we’re able to get customers access to our infrastructure. So it may offer us broader reach into developer communities and simpler onboarding.
So, again, to come back to the previous comments that I made on software, it may open up some of the smaller areas of the market, with smaller AI startups and enterprise customers who are looking for a simpler solution. So we continue to monitor this. We are seeing an increasing number of these types of offerings coming to market. And for us, we think it’ll be an additional demand driver for the underlying compute layer that we are providing.
Steven Gagola, Analyst, Jones Trading: Thank you, Kent. And if I could just ask one more on Horizon 1. Does the growth of your cloud services business influence which partners you’re willing to consider for colocation, potentially at Horizon 1, given arguably they can be competitors?
Kent Draper, Chief Commercial Officer, Iron: Yeah. I mean, it’s something that we continue to evaluate in terms of the mix. And I think what you’re probably referring to are, yeah, Neo Cloud customers on the colocation side. Now the majority of Neo Clouds have a very different profile to hyperscalers in terms of colocation. So even within the broader colocation market, there is a significant degree of differentiation.
If you think of hyperscalers, they’re typically looking for longer term contracts, often ten to twenty years, extremely credit worthy, but they drive, you know, a hard bargain in terms of the financials and the economic returns that you’re able to achieve. With Neo Clouds, we often see shorter term requirements, so typically, you know, it might be five to fifteen years, and less credit worthy than the hyperscalers. So, you know, it’s all something that we factor in in terms of that risk reward element that we discussed earlier. But in terms of your question, because we have heard from a number of people asking whether, you know, the fact that we’re offering a cloud service limits our ability to do colocation,
I would actually say quite the opposite. Most of the colocation customers that we’re talking to significantly value the fact that we understand how to operate these clusters at scale, that we have the data center knowledge. We know how to design data centers to operate these clusters, and we’ve proved out through our own cloud service that we can operate them at a very efficient level. So, you know, I don’t see any kind of conflict there, and it hasn’t been a particular issue for us over time.
Steven Gagola, Analyst, Jones Trading: Appreciate it, Kent. Thank you.
Daniel Roberts, Co-Founder and Co-CEO, Iron: So just to jump in on the Lepton cloud as well, it hasn’t really been live functionally. So NVIDIA’s been working through a number of items in relation to making that available. I think some of it’s now live in early access, and we’re in direct conversation with them about integration at the moment. So it is a demand partner that we can absolutely envisage using.
Conference Operator: Thank you. Our next question comes from Ben Summers with BTIG. You may proceed.
Ben Summers, Analyst, BTIG: Hey, good morning or good afternoon, guys, and thanks for taking my question. So kind of more on the colocation side. Just curious what went into the decision to start developing Horizon 2, and if that was because a lot of potential customers were thinking about potentially scaling beyond the initial 50 megawatts of Horizon 1. And then, kind of bigger picture, as we progress towards getting Sweetwater online, what’s the different customer profile, if any, for larger scale sites versus potentially just wanting 50 megawatts or 100 megawatts?
And just kind of any color on the counterparties that you’re having conversations with? Thank you.
Daniel Roberts, Co-Founder and Co-CEO, Iron: So we haven’t committed the full CapEx to building out Horizon 2. Importantly, over the last seven years, our whole business model has been built around cheap optionality. And sitting here right now today, looking at the bigger picture, and I can drill into that, it just makes sense to order long lead items and start moving the ball ahead on a potential commissioning of a Horizon 2 facility. A lot of the way the S curve works for CapEx in respect of these facilities is that you’ve got a long period of smaller cash outlays that build up over time before the larger CapEx commitments come in. So it makes sense to put down deposits on long lead items and get the ball rolling, so that we can maintain a really competitive, fast time to power for Horizon 2.
Now, sitting here today relative to three or six months ago, we’re seeing further validation of the decision to spend a relatively small amount of capital. We are seeing demand take-up for AI cloud. We’re seeing a number of inbounds for colocation. We’re seeing better visibility on the overall demand supply imbalance for liquid cooled chips. So it’s a bit of a no brainer, to be honest.
And in terms of committing full CapEx to that, we’ve got time and we’ll just continue to monitor the market live because things are changing week to week in this industry. And that flexibility, having a governance structure that’s founder led, the ability to make quick decisions, work with the Board and adapt to where the market’s going is really important because it is super dynamic.
Mike Power, VP, Investor Relations, Iron3: Awesome. Thank you, guys.
Conference Operator: Thank you. I would now like to turn the call back over to Daniel Roberts for any closing remarks.
Daniel Roberts, Co-Founder and Co-CEO, Iron: Thank you very much. Thanks, everyone, for dialing in. It’s obviously been an exciting quarter and an exciting year. We’re thrilled about expanding to 10,900 GPUs in the coming months and really putting our AI cloud service further on the map. But for us, most of our time is now focused on what lies beyond that.
So expanding our three gigawatt power portfolio is something we’re working hard on. That’s exciting. It’s many years away, but the three gigawatts was also many years away when we started seven years ago. So continuing to position ourselves ahead of the curve in every respect is just critical. And it’s really important when you’re fighting this real world, digital world imbalance, where digital demand increases overnight; it goes exponential.
Your ability to service that demand with real world infrastructure and compute works in a linear fashion. It’s harder. It takes longer. So the ability to preempt those digital demands and build for tomorrow, position for tomorrow rather than where we are today, is a key competitive advantage and something we’ll maintain. And it manifests itself in us building 200 kilowatt racks, but the industry can’t yet support 200 kilowatt racks.
So Horizon one, we’re having to reconfigure to make it smaller. So we’ll continue to keep that in mind. We’re excited about the future. We appreciate all of your support and can’t wait for the next quarterly earnings. Thanks, everyone.
Conference Operator: Thank you. This concludes the conference. Thank you for your participation. You may now disconnect.