On Thursday, 05 June 2025, Rambus Inc. (NASDAQ:RMBS) presented at the Bank of America Global Technology Conference 2025. The company highlighted its robust performance driven by strong demand in the server market, particularly AI servers, while addressing potential challenges such as market uncertainty and tariff impacts.
Key Takeaways
- Rambus reported a 52% year-over-year growth in Q1 product revenue, driven by demand in the server market.
- The company maintains a long-term gross margin target of 60% to 65%.
- Rambus is strategically positioned to capture a significant share of the RCD chip market, estimated at $750 million.
- The launch of MRDIMM technology is scheduled for 2026, with a ramp-up in companion chip revenue expected in the second half of 2025.
- The company is closely monitoring indirect tariff impacts, with current operations unaffected due to its supply chain in Taiwan and Korea.
Financial Results
- Rambus has consistently operated within a gross margin range of 61% to 63% over the past three years.
- The patent licensing business contributes approximately $210 million with a 100% margin, remaining immune to tariffs.
- The Silicon IP business, growing 10% to 15% annually, is minimally exposed to China, reducing tariff risks.
Operational Updates
- The server market, including AI servers, is a major growth driver, increasing demand for Rambus products.
- Rambus aims for a 40% to 50% share of the RCD chip market, with DDR5 companion chips adding $600 million to its serviceable addressable market.
- Companion chips currently contribute a low-single-digit share of revenue, with a significant ramp expected later in the year.
Future Outlook
- Rambus anticipates substantial growth in companion chip revenue in 2026, aligning with the launch of MRDIMM technology.
- The market for high-performance client systems is expanding, with Rambus targeting a $200 million market for client clock drivers.
- The company is focused on expanding its Silicon IP business, particularly HBM controllers for AI training environments.
Q&A Highlights
- CEO Luke Seraphine emphasized the positive Q1 product growth and the strategic positioning of Rambus in the expanding AI server market.
- CFO Desmond Lynch reiterated the company’s commitment to maintaining its gross margin targets despite potential market uncertainties.
- The leadership team expressed confidence in Rambus’s ability to navigate supply chain shifts and leverage its technology investments for future growth.
In conclusion, Rambus’s presentation at the Bank of America Global Technology Conference 2025 highlighted its strategic initiatives and market positioning. For a deeper dive, readers are encouraged to refer to the full transcript below.
Full transcript - Bank of America Global Technology Conference 2025:
Duxan Jang, Analyst, Bank of America: Thank you for joining us today. My name is Duxan Jang. I’m part of the US Semiconductors and Semicap Equipment team here at Bank of America. I’m very delighted to host the Rambus team today. Luke Seraphine, chief executive officer, and Desmond Lynch, chief financial officer.
Thank you so much for coming.
Desmond Lynch, Chief Financial Officer, Rambus: Thank you, Duxan.

Duxan Jang, Analyst, Bank of America: Did you have any disclosures that we might have to make?

Desmond Lynch, Chief Financial Officer, Rambus: Yes. We just encourage everyone to read our documents on file with the SEC. They cover a lot more about the company than we will talk about today, Duxan.
Duxan Jang, Analyst, Bank of America: Awesome. I think we can start high level. Can we talk about the state of the union? What are you seeing in the demand environment today, especially versus the beginning of the year, since we've had so many ups and downs this year?
Luke Seraphine, Chief Executive Officer, Rambus: Yes. We're very pleased with how the year started for us. We see some very nice tailwinds in the server market, both traditional servers and AI servers, which drive demand for more bandwidth and capacity in those servers and, as a consequence, demand for our products. So we had a very nice first quarter on the product side, with 52% growth compared to the same quarter last year. But we also see demand for our silicon IP business as people develop custom chips for AI.

They need high-level security IP, high-speed interconnect controllers, and high-speed memory controllers. So the overall AI environment, in addition to the traditional server environment, has been quite good for us at this point in time.
Duxan Jang, Analyst, Bank of America: Awesome. I'll get back to the silicon IP business, but starting with the product side, as you mentioned, a very good quarter in the first quarter. Stepping back, how should we think about the overall market size? And can you talk about the competitive dynamics?

Luke Seraphine, Chief Executive Officer, Rambus: Sure. We traditionally started by building what we call RCD chips, or buffer chips: little controller chips that sit on memory modules and manage the interface between the processor and the memory. We estimate the market for this chip at about $750 million. The nice thing about the DDR5 generation of modules is that, in addition to the RCD chip, DDR5 requires what we call companion chips, chips that did not exist on the module in DDR4, the prior generation.

So these companion chips add an additional $600 million of SAM to the $750 million SAM. Beyond that, some of the requirements we see today in the server environment are going to be demanded in the client space as well. High-performance client systems are going to require chips similar to the RCD chip, and we believe that will add a couple hundred million dollars more of SAM. So we see a SAM expansion coming from more content on server memory modules and from the adoption of similar technologies on the client side.

Duxan Jang, Analyst, Bank of America: What would you say are the biggest drivers for this market? People talk about the memory channels, the number of channels, the bandwidth, the capacity. What would be the biggest driver for Rambus?
Luke Seraphine, Chief Executive Officer, Rambus: The first general comment I would make is that whether it's an AI server or a traditional server, there is very high demand for more bandwidth and more capacity. The reason is that server technology moves faster than memory technology: you have more and more cores on every CPU, and every core needs its dedicated memory. That drives demand for more bandwidth and more memory in general, whether it's an AI server or a traditional server. What it translates into for us is that in the DDR4 generation of products, a few years ago, we had to develop an RCD chip every other year.

Today, in the DDR5 generation, we have to develop a new chip every year, so the cadence has doubled. And as I said earlier, we also have to develop those companion chips. The drivers are really the growth of AI servers, the growth of traditional servers, and the number of channels per CPU. In the DDR4 generation, there were about eight memory channels per CPU.

In the DDR5 generation, it's a mix of eight and twelve, converging to twelve, and we believe that by the end of the DDR5 generation people will probably converge to sixteen channels. That means sixteen memory channels on each CPU. The other driver is how many modules you can populate per channel: some applications require one module per channel, others require two.

So that's how the market grows: through the number of channels and the number of modules you can put on every channel. That's how the market is growing.
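To put rough numbers on that channel math, here is a quick illustrative sketch in Python. The two server configurations are hypothetical, chosen only to show how channel count and modules per channel multiply buffer-chip demand; they are not figures from the discussion.

```python
# Back-of-the-envelope RCD-chip demand per server. Channel counts follow the
# ranges cited above (DDR4 ~8 channels/CPU; DDR5 converging to 12, later 16);
# the specific configurations below are hypothetical.

def rcd_chips_per_server(cpus: int, channels_per_cpu: int,
                         modules_per_channel: int) -> int:
    """Each populated memory module carries one RCD buffer chip."""
    return cpus * channels_per_cpu * modules_per_channel

# Hypothetical two-socket DDR4-era box: 8 channels/CPU, 1 module per channel.
ddr4 = rcd_chips_per_server(cpus=2, channels_per_cpu=8, modules_per_channel=1)

# Hypothetical two-socket DDR5-era box: 12 channels/CPU, 2 modules per channel.
ddr5 = rcd_chips_per_server(cpus=2, channels_per_cpu=12, modules_per_channel=2)

print(ddr4, ddr5, ddr5 / ddr4)  # 16 48 3.0 -> 3x the buffer chips per server
```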
Duxan Jang, Analyst, Bank of America: Mhmm. Just going back to my earlier question on competition. I know you're the leader in RCD chips; I think you said about 40% share exiting last year, and your goal is 40% to 50%. What needs to happen for you to reach the high end of that target, or even push beyond it?
Luke Seraphine, Chief Executive Officer, Rambus: That's a good question. In the DDR4 generation, we started with 0% and walked our way up to about 25% share. In the DDR5 generation, we enjoyed a little north of 40% last year, because we invested very early in every sub-generation of product. That's really important in this ecosystem, because the qualification processes are very complex and take a lot of time.

If you are the first to introduce a new sub-generation of product into the ecosystem, you marshal the resources of every ecosystem member, and they work with you to get that product out. We've been very good at investing very early in every sub-generation, and that's what took us from the 25% share we enjoyed in the DDR4 generation to the 40%-plus share in DDR5. Now, for reasons of security of supply, the ecosystem will always want multiple suppliers, typically three. So we have two competitors:

Montage, a Chinese company, and Renesas, which bought that business from IDT. The ecosystem will always require this type of arrangement, because these little chips sit between the processor and the memory, and if any one vendor fails for whatever reason, you block the whole supply chain of that ecosystem. So I think we can continue to grow our share. Our goal is to get to about 50%, but then there will naturally be some saturation in share.

So we have to count on market growth and, more importantly, content growth as we introduce all of these companion chips on the same module.
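As a reader's aid, the share and SAM figures above imply some simple arithmetic (illustrative only, not company guidance):

```python
# Illustrative arithmetic on the figures discussed above: a ~$750M RCD SAM,
# a 40-50% share goal, and ~$600M of additional DDR5 companion-chip SAM.

rcd_sam = 750e6
companion_sam = 600e6

for share in (0.40, 0.50):
    print(f"{share:.0%} of RCD SAM -> ${share * rcd_sam / 1e6:.0f}M")
# 40% -> $300M; 50% -> $375M

print(f"Total DDR5 module SAM: ${(rcd_sam + companion_sam) / 1e9:.2f}B")
# $1.35B once companion chips are included
```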
Duxan Jang, Analyst, Bank of America: Understood. Talking about content: I know during the first quarter earnings call you mentioned you're generally CPU-agnostic, whether it's x86 or ARM. But given that ARM CPUs tend to have higher core counts, does that benefit you? And for CPUs like NVIDIA's Grace that use LPDDR, how does that factor into your content?
Luke Seraphine, Chief Executive Officer, Rambus: Those are two different questions. Whether it's an ARM core or an x86 core, we truly are agnostic. What people are looking for is to add more and more cores, for reasons of computational power. But the more cores they add, the more memory they have to add, and all of that is good for us.

Whether it's ARM or x86, we actually welcome that competition. We welcome the competition between ARM-based processors and x86, and we welcome the competition within each camp, because they all drive demand for more buffer chips. With respect to LPDDR, it's a niche market today. LPDDR is typically used in client applications, and it brings some benefits, in particular on power.

That's why it's called low-power DDR. But it also comes with challenges around reliability and the physical requirements you have there. Rambus has been in this business for thirty-five years; every leg of our business has to do with memory technologies. So we have a patent portfolio that covers LPDDR and DDR.

We have silicon IP cores for LPDDR and for DDR. And when it comes to products, the vast majority of products today are DDR. If there were a compelling reason to build an LPDDR solution on the product side, we would be ready to do that.
Duxan Jang, Analyst, Bank of America: Understood. And staying on the AI topic: we're obviously seeing a lot of demand moving away from training and toward inference. Does that also have an impact on your product cadence or content?
Luke Seraphine, Chief Executive Officer, Rambus: It will be another tailwind for us. Typically, inference systems are simpler than training systems. A lot of what currently runs on GPUs and HBM can actually run on more standard processors on the inference side, and that will drive demand for us. The nice thing about this market is that whatever processor you use, it has to use DRAM on the other side, and those DRAM interfaces are standard interfaces.

So whether that DRAM interface is on a standard processor, ARM-based or x86-based, or on a custom chip that people develop for AI inference, you will have the DDR interface. And on the other side of the DDR interface, you will have a module with our standard product. So all of these are good tailwinds for us, and we're looking forward to enjoying the rise of AI inference.

Duxan Jang, Analyst, Bank of America: Mhmm. I do want to go back to the earlier LPDDR question, because when we talk to ARM and NVIDIA, they obviously have very aggressive outlooks for the Grace CPU. So if you were to develop an LPDDR product for the server side, how long would it generally take you to develop and then ramp?
Luke Seraphine, Chief Executive Officer, Rambus: The first thing I would say is that current LPDDR solutions are soldered solutions, not on modules. So you don't have the equivalent of a buffer chip today. It's a bit like HBM: today, HBM doesn't require a buffer chip.

So we watch that. But to the extent that the market moves to solutions where LPDDR can be reliably integrated on a module, as opposed to being soldered, the development of such a chip would be similar to the development of a buffer chip. These developments last a couple of years, and then qualification in the market takes time as well. For us, it's very similar technology, whether it's LPDDR or DDR.

It's a very similar environment, these chips we have to develop for modules. The module environment is very specific in terms of thermal and noise requirements, and it's an environment we know well. The ecosystem is one we know well, too: the vendors of LPDDR memory are the same vendors as for DDR memory.

The end users are going to be the same end users, so the whole ecosystem is very similar. As a consequence, it would take us a similar amount of time. This push, as you say, is a very interesting concept, but it's an ecosystem that would have to converge on a standard solution, because every chip has to talk to every memory module. The industry will have to converge on a standard, just as we do today with buffer chips, typically through JEDEC, and we are an active member in those JEDEC discussions. So, yeah.
Duxan Jang, Analyst, Bank of America: Got it. And then on to everyone's favorite topic: tariffs. You've said patent licensing is not affected, but on the silicon IP and product side it's tougher to gauge the indirect impact. How should we think about the overall impact today? Obviously we hear more every day, but compared to, say, April, when you reported, a lot of the nuances have stabilized. So how should we think about it today?
Luke Seraphine, Chief Executive Officer, Rambus: If you look at our business, our patent licensing business, as you correctly say, is completely immune. These are long-term legal agreements with our customers, and there's no exchange of technology there. That business is about $210 million at 100% margin, which gives us a very solid base of protection against tariffs.

The silicon IP business is also not affected by tariffs. We provide IP to other companies, and our exposure to China, even in the IP business, is very small as a company: a low-single-digit percentage of our business. So even if there were questions about tariffs on silicon IP, and there are not, the impact on us would be minimal.

Then the question is our product business, which was about $250 million last year. We review our tariff situation almost weekly, and at this point in time we are not affected. One reason is that our front-end supply chain is in Taiwan, and our back-end supply chain is in Taiwan and Korea, not China.

And we sell our products to the memory vendors, who typically buy them in Asia. These products are exempt at this point in time. Things can change, but we are under these exemptions, so at this point there is no direct impact. There are potential indirect impacts that we're watching.

One is, if other companies shift their supply chains away from China to other parts of Asia, will that create a supply crunch that indirectly affects us through our suppliers? The other is the overall uncertainty in the market: are these tariffs going to destroy demand? Those are indirect effects we're watching. In terms of direct effects, there are none at this point in time.
Duxan Jang, Analyst, Bank of America: Understood. Just going back to the China exposure: obviously we're hearing about the EDA companies being shut out of that market. Would you say that's a similar risk for you on the IP business?
Luke Seraphine, Chief Executive Officer, Rambus: There's always that risk, but it's not something new to us. Just as we review tariffs on a regular basis, we review IP restrictions on a regular basis, and we've been doing that for years, well before tariffs were in place. At this point in time we've had very, very little impact. And as I indicated earlier, our exposure to the China market is very small.

It's low single digits. So even if we had a 100% impact there, it would have a low-single-digit impact on our business. But today, there is no impact.
Duxan Jang, Analyst, Bank of America: Understood. Moving on to the companion chip opportunity. You launched eight new chips last year, and I believe you said you expect a low-single-digit revenue contribution in the first half. How should we think about the second half and then next year, when we should see more of a ramp? If you can quantify, or qualify, or give us some description.
Luke Seraphine, Chief Executive Officer, Rambus: Yes. As we indicated earlier, when the market moved from the DDR4 generation of memory modules to DDR5, the industry, through JEDEC, where everyone has to agree, decided that some functions that sat on the motherboard in the DDR4 generation would instead be implemented on the memory module in DDR5. So when you move from DDR4 to DDR5, instead of having one RCD chip on the module, you have one RCD chip, one power management chip, two temperature sensors, and one controller chip we call the SPD hub. When that transition happened, our strategy was to secure RCD chip market share first, because that's the most complex chip to make. That explains why we could move from 25% share on DDR4 to more than 40% on DDR5: we wanted to focus on that.

That transition was strategically important for us because that's the most complex chip. Then we started to develop our companion chips. The next most complex chip on the module is the power management chip, and in the first generation of DDR5 we were not playing. There were a lot of players.

Actually, only a few have survived. One of the reasons is that doing a power management chip is one thing; doing a power management chip in a module environment that is very noisy, very tight in terms of real estate, and thermally challenged is a different thing. So we invested in an in-house power management chip team a little more than two years ago.

We introduced our power management chips in April of last year, and we have also introduced the other companion chips. Now, like everything in this market, you have to intercept a platform from Intel or AMD; that's how the market works. The platforms that use our generation of power management and companion chips are going to start ramping, if they're not late, in the second half of this year. So the way to look at it is: you were right.

Today it's a low-single-digit portion of our revenue as we ship pre-production, qualification quantities. When this platform ramps toward the second half of this year, we're going to see our share growing, and we're going to see the bulk of that growth in 2026. We've been public in saying that for these companion chips our objective at this point is to reach about 20% share, because the competitive landscape is a bit different. But obviously we'll try to do more than that.
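A minimal tally of the per-module content growth described above, using the DDR5 bill of materials as listed in the answer:

```python
# Chip content per registered memory module across the DDR4 -> DDR5
# transition, as described above: the RCD stays, and DDR5 moves a power
# management chip, two temperature sensors, and an SPD hub onto the module.

module_bom = {
    "DDR4 RDIMM": ["RCD"],
    "DDR5 RDIMM": ["RCD", "PMIC", "temp sensor", "temp sensor", "SPD hub"],
}

for generation, chips in module_bom.items():
    print(f"{generation}: {len(chips)} chips -> {', '.join(chips)}")
# DDR4 RDIMM: 1 chip; DDR5 RDIMM: 5 chips on every module
```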
Duxan Jang, Analyst, Bank of America: On your MRDIMM chipset, qualifications are obviously ongoing, and it probably depends a lot on when customers ramp their products. But what would you say is a realistic ramp timing for Rambus? When does this become more material for you?
Luke Seraphine, Chief Executive Officer, Rambus: Yes. For people who don't know, the MRDIMM chipset is a very interesting concept. The idea is that on a memory module, you double the amount of memory and multiplex access to that memory onto the memory bus. What it allows the industry to do, with exactly the same infrastructure and the same CPU architecture, is to remove a standard DDR5 module, plug in an MRDIMM instead, and all of a sudden double the capacity and double the bandwidth. It has a lot of traction because, as I said earlier, people are always looking for more bandwidth and more capacity.

Again, the industry had to converge on the exact definition of this MRDIMM. That's why, when we announced it, we said it was the first JEDEC-compliant chipset: that gives you confidence the industry is going to use it. As we explained for the companion chips, the MRDIMM is linked to a platform launch, and this is a platform launch that will happen in 2026. We have developed the products.

We have sampled the products to our customers, who are going through their lengthy qualification processes, but the product will ramp with the follow-on generation of CPUs, which at this point in time is scheduled for the second half of 2026. So we're going to see the initial ramp of those products in the second half of 2026.
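A sketch of the drop-in idea described above, with purely illustrative numbers (the answer claims the doubling; the specific capacity and data rate below are assumptions):

```python
# MRDIMM as described above: same slot and CPU, but the module carries twice
# the DRAM and multiplexes two ranks onto the bus, so capacity and effective
# bandwidth roughly double. Baseline numbers below are illustrative only.

standard_ddr5 = {"capacity_gb": 64, "data_rate_gts": 6.4}
mrdimm = {key: value * 2 for key, value in standard_ddr5.items()}

print(standard_ddr5)  # {'capacity_gb': 64, 'data_rate_gts': 6.4}
print(mrdimm)         # {'capacity_gb': 128, 'data_rate_gts': 12.8}
```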
Duxan Jang, Analyst, Bank of America: Got it. And then the last one on products: the client opportunity, which you alluded to earlier, the clock drivers. How should we think about the opportunity there and its ramp timing?

Luke Seraphine, Chief Executive Officer, Rambus: Yes. Why do we go there first? Some of the challenges in the data center have to do with the environment: you have to transmit signals faster and faster between the processor and the memory in a very noisy environment. That's very tough to do, especially when you have to double the speed at such a fast pace. That's why we developed RCD chips on the CPU side.
The RCD chip is all about what we call signal integrity: transmitting very small signals in a very noisy environment without losing data. Those requirements don't exist today in the client space. But as client systems become more and more performant in terms of speed, what we see is that on the high end of next-generation client platforms, we're going to face similar signal integrity challenges, and we're going to need chips that reconstruct those signals, just as we do on the CPU side for data centers.

That's what the client clock driver is. It's going to address a very small portion of very high-end PCs, if you wish, so the market is going to be modest: we expect it to be about $200 million. And the ramp is starting now.

It's going to grow quarter over quarter through 2026. But strategically, as time passes, more and more client systems are going to require that signal integrity function, and client systems are also going to require more elaborate power management functions. What we see is the technologies we develop for the data center waterfalling into high-end client systems, and with time, more and more client systems using these technologies.

So the CKD is the first of the building blocks we are putting in place for the future.
Duxan Jang, Analyst, Bank of America: Got it. Moving on to silicon IP. Obviously the HBM market is the one driving it. How should we think about your content when the HBM3 stack moves from eight-high to twelve-high? And then on HBM4, is there an uplift there?

Luke Seraphine, Chief Executive Officer, Rambus: Our silicon IP business, for people who are not too familiar with it, is a very different business model. We develop memory controllers, HBM controllers in this case, and we license them, typically to semiconductor companies, who integrate them into their chips, whatever the chip is: it may be a CPU, a GPU, a custom ASIC. What that means is that we have to develop those controllers, and engage with those customers, eighteen months to two years before those chips are actually in the market.

In terms of HBM3 and HBM4, we've been engaged with customers on HBM3 for a couple of years now. We announced HBM4 last year, and we were already engaged with customers on HBM4 last year. In fact, I think we indicated, when we commented on our Q4 results, that one of the reasons we had good silicon IP results in Q4 was demand for HBM4 controllers. Our strategy on HBM has always been to keep these controllers a little higher in speed and performance than what the market requires.

So we have very early engagements with our lead customers, ahead of what the market needs, because we have to be about two years ahead. The size of the stack does not really drive our development, but speed does; we always try to be at a slightly higher speed than what the market requires. The demand for AI training in particular, where GPUs use HBM memory, drives the demand for HBM silicon IP controllers. And what you have to understand is that in a GPU-HBM environment, there's no equivalent of a buffer chip.
There's no chip sitting between the GPU and the HBM memory. As you said, there's a stack of memory, but inside the GPU there's an HBM controller, which we sell as silicon IP, that drives the connection to that HBM memory. So it's been a good driver of growth for the silicon IP business. As you know, our silicon IP business is about $120 million a year, and we say it's growing 10% to 15% a year. Part of that 10% to 15% has been driven by the demand for HBM over the last couple of years.
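For context, compounding the stated growth range is simple arithmetic (illustrative, with no base revenue assumed):

```python
# What "growing 10% to 15% a year" compounds to over three years.
# Pure arithmetic on the stated range; apply it to any base revenue.

for rate in (0.10, 0.15):
    print(f"{rate:.0%} a year for 3 years -> {(1 + rate) ** 3:.2f}x")
# 10% -> 1.33x; 15% -> 1.52x
```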
Duxan Jang, Analyst, Bank of America: Got it. I know we're running out of time, but an important question for Des. As we think about the margin trajectory, Q1 was a little weaker on the product side, and you have a lot of different factors going on: price negotiations, cost-downs, price erosion. So how should we think about the second-half outlook and into 2026? You also have the companion chips ramping.
Desmond Lynch, Chief Financial Officer, Rambus: Yes, it's a good question. On the product gross margin side, we have a long-term target of 60% to 65%. If you look at our annual performance over the last three years, we've been operating at 61% to 63%, so certainly within our targeted range. We're very pleased with how we've been able to operate, and this is a healthy margin for a chip business.

We've done a really nice job: we've been disciplined on price and have continued to make manufacturing cost savings to maintain that margin level. The new product contribution will be contained within the overall 60% to 65% gross margin target. In any given quarter, depending on mix and where products are in their cycle, the margin can move around a little. But over the long term we have a good track record of delivering on product gross margin, and that's something we'll continue going forward.
Duxan Jang, Analyst, Bank of America: Awesome. I think we've run out of time, so thank you so much for coming, and thank you to the audience as well.
Thank you. Thank you.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.