Synopsys at SNUG Silicon Valley 2025: AI’s Role in Chip Design

Published 19/03/2025, 19:06
On Wednesday, 19 March 2025, Synopsys (NASDAQ: SNPS) took center stage at the SNUG Silicon Valley 2025 conference. The discussion, led by Synopsys’ Sassine Ghazi and Microsoft’s Satya Nadella, focused on the transformative role of AI in silicon and systems design. While the dialogue was optimistic about AI’s potential to drive innovation, challenges in managing complex workflows were acknowledged.

Key Takeaways

  • Synopsys emphasized the convergence of silicon and systems design, highlighting the need to "re-engineer engineering" through new workflows and partnerships.
  • Microsoft’s involvement underscored the importance of AI in enabling high-fidelity tape outs and innovation across data centers.
  • The collaboration with Microsoft, NVIDIA, and OpenAI aims to optimize EDA products and accelerate chip design.
  • A roadmap was introduced for AI in workflow transformation, from co-pilots to auto-pilots, enhancing autonomous decision-making in chip design.
  • Synopsys launched the HAPS-200 and ZeBu-200 platforms, boosting verification efficiency for customers like Arm and NVIDIA.

Operational Updates

  • Synopsys optimized its technology for Microsoft Azure, achieving significant time and cost improvements.
  • Customers adopting Synopsys.ai have reported substantial benefits in managing complexity.
  • Strategic partnerships with NVIDIA, OpenAI, and Microsoft are set to deliver future solutions.

Future Outlook

  • The introduction of "agent engineers" aims to collaborate with human engineers, evolving from assistive to autonomous decision-making in chip design.
  • Silicon Lifecycle Management (SLM) will play a crucial role in in-field health monitoring, especially with the rise of 3DIC.
  • Synopsys plans to optimize processors, including CPU, GPU, and QPU, to leverage AI in evolving workflows.

Conclusion

For a deeper understanding of Synopsys’ strategic direction and collaborations, refer to the full transcript below.

Full transcript - SNUG Silicon Valley 2025:

Sassine Ghazi, Synopsys: Good morning. I was wondering why you were so quiet for a while. They said they’re reading the legal disclosure, so I hope you read it carefully. I’m so excited to be here and welcome you to our thirty-fifth SNUG. Thirty-five years of this conference, and the vitality, the energy is amazing.

So thanks to you. You know, I asked Art, how did SNUG start? It was only three years after the company was founded. Given that the technology, synthesis, was truly disruptive and transformative, we decided to bring in customer feedback. And you guys love to give us feedback, and we welcome the feedback.

So thank you for keeping that event thriving and strong thirty-five years into it. Now this year, we have, as Anne mentioned, a special colocation with a Synopsys Executive Forum. So as you’re mingling, you’re gonna see not only the users, but the users plus our customers’ executives, some media, some analysts, investors. So welcome, and I hope you’ll enjoy the next couple days. What an exciting, exciting time to be in our industry.

It’s amazing, the amount of technology and the speed, the pace at which things are moving. We are in this era of pervasive intelligence, which truly promises to deliver incredible disruption and innovations and advancements to humankind. There will be an explosion of new products. Those products will be software-defined intelligent systems powered by AI silicon. And what an incredible time to be an engineer.

It’s really such a special time to be an engineer, given all the opportunities for innovation and the need to deal with that pace and complexity of innovation. Now I wanna go through some examples of what is possible. First, imagine the prediction of infectious diseases. Throughout history, humanity has faced devastating pandemics, most recently with COVID: the disruption to the supply chain, to the economy, not to mention millions of deaths. In the past twenty years alone, there have been more pandemics and epidemics than in the prior one hundred years. It’s essential that we use technology to predict and understand and prevent pandemics.

Today, innovative companies such as BlueDot are leveraging AI to analyze massive amounts of unstructured data in 65-plus languages, trying to predict diseases and their impact and spread before they accelerate. Another example is faster drug discovery. According to a recent NYU study, the risk of developing dementia at any time after age 55 among Americans is forty-two percent, and roughly eleven percent of adults age 65 and older have dementia. These alarming statistics underscore the urgent need to find cures.

Already, there are positive signs: leveraging AI and quantum to shorten drug discovery from ten-plus years to two years, with higher success rates. I’m confident that with AI and technology advancements, we will soon be able to solve these very complex challenges. Now let’s look at a few examples that are closer and will have an impact on our daily lives. Consider Helix, an AI-powered robot, and this is not just any robot. This robot is a generalist vision-language-action model robot that unifies perception, understanding, and learned control.

Historically, a robot was just a robot, meaning it performed hard-coded actions. This robot has the ability to learn, reason, then take actions. This is a fusion of the latest in AI technology and robotics, or humanoid robotics, and the engineering of the electronics as well as the physical, mechanical aspect, and bringing it together with a massive workload is an engineering feat. Now, a few months ago, SpaceX flew a 250-ton, 233-foot Super Heavy booster back to its launch pad. A custom-built tower caught this plummeting massive object, traveling half the speed of sound, back at the original launch tower.

We are witnessing this explosion of new inventions that are AI-powered intelligent systems running on advanced silicon. These are very complex and difficult to design. The pressure engineers are feeling today is not only complexity; it’s complexity and the pace by which they need to deliver these products, as well, of course, as the cost and affordability. Despite the exponential design complexity, the pace of innovation has been accelerating. To build these AI products, you need highly efficient silicon.

The silicon and systems design worlds are absolutely converging, compounding complexity and also creating an opportunity for innovation. That’s why we’re talking about how we re-engineer the engineering of these products in order to deliver on these opportunities of the future. We talk about the era of pervasive intelligence, where AI and smart technologies are omnipresent, interconnected, and seamlessly integrated into the fabric of our daily lives. The increase in software-defined intelligent systems and that proliferation of silicon is what we are striving to provide solutions for, so you can deliver on these promises and exciting products. This systemic complexity and the relentless race to market are impacting every industry. This is no longer only an EDA challenge, or a physics challenge, or a mechanical challenge.

They’re all compounding and intersecting. And more than ever, our engineers are facing a truly exponential challenge. So how do we think differently in terms of our workflows and our ability to deliver these future products? What I call the ingenuity of engineering is when you over-constrain a problem and are still expected to deliver a step-function improvement on the product. There is no better example than what Moore’s Law has achieved over the last number of decades, where there are limitations that come from physics, from architecture, from manufacturing, etcetera, and the ingenuity of engineering truly came together to continue that pace and rhythm of innovation.

Now we cannot do it alone. We’re working with a number of ecosystem partners. And now, in the age of AI, we are working with companies from NVIDIA to OpenAI and Microsoft in order to deliver that future that I’m describing. As far as Microsoft goes, we have a long-standing relationship with them. I remember the early conversations with Microsoft leadership were about how we bring together our EDA products and optimize them on their silicon and on Azure in order to reduce the total cost of ownership for them and for the customers, the semiconductor companies that will be using Azure to provide hardware capacity to design these complex chips.

So today, we have our EDA products already ported to Azure on specified hardware, and I’ll talk later about what we are doing to optimize our technology on the various compute infrastructures and compute architectures. Then the second level of partnership with Microsoft was to leverage the Copilot technology as well as the OpenAI LLMs in order to bring them into our own generative AI assistant for chip design. So I’m honored to have Satya join me here and have a brief conversation on how we see the world of AI, silicon to systems, and the solutions that we have an opportunity to deliver on. Satya, great to see you.

Satya Nadella, Microsoft: It’s great being with you, Sassine. It’s just wonderful to partner with you and have a chance to chat

Sassine Ghazi, Synopsys: today. Thank you. So just to give you a sense, I don’t have actual statistics, but I’d say roughly about 80-ish percent of the people you’re talking to in here are silicon folks. Okay? And I’d say the other 20% are a mix of software and systems.

And I remember the first time we spoke, you made it a point to remind me that your roots come from silicon, with your double-E background and, of course, your time at Sun, where you were more at the system level, and, of course, at Microsoft, software. But what an interesting journey to come back right now at Microsoft, where you’re very focused on the full stack from silicon to systems. So describe to us why, and how do you see these opportunities as we look ahead?

Satya Nadella, Microsoft: You know, it’s fantastic to have a chance to talk to a room full of people who are deep silicon and systems people, as a failed silicon engineer. You know what happens: you get to talk to all of you, right, as opposed to doing silicon engineering. But really, I think it’s just an unbelievable time. And it reminds me a little bit of when I started in the tech industry in the late eighties, early nineties, because in some sense, it’s a golden era, right?

At that time, I remember Patterson’s book had come out. Everybody was like, wow, there is a real movement here. And it feels like that to me, where there is a new book that needs to be written on exactly what is happening. If I look at even the fleet in Azure today, it looks unlike the one I got started with, like, fifteen years ago. Everything: the considerations for the data center design, the power draw, the network, the compute, the storage, and then of course the silicon systems itself.

And so I think that’s what’s happening. So then the question, I think, that you asked is, what’s driving all this? It’s kind of classic Moore’s Law on hyperdrive. I think at some level, what we are seeing is these scaling laws. You know, essentially, everybody was bemoaning the fact that Moore’s Law is ending, except we now have found a new set of S-curves. And that’s, I think, the unique thing about our tech industry: it’s not even about one S-curve.

It’s about multiple S-curves. So you have scaling laws which worked in pre-training. It’s not that pre-training scaling laws are over. It’s just that we found another scaling law for post-training, basically test-time compute and reasoning. And so both of these are driving what I think are unbelievable capabilities, which, of course, you yourself are using to speed up even silicon and system design.

And we are using it for knowledge work and productivity and software development. And I think that’s the exciting thing we’re seeing.

Sassine Ghazi, Synopsys: No. Exactly. Now when we talk about silicon to systems, of course, at the silicon level, as you mentioned, Moore’s Law has been a driving force to continue the opportunity of silicon to deliver better performance, better power, etcetera. Now with the complexity of the workloads and AI, we have to think differently. We have to think from the workload down to the silicon.

And, of course, as you’re designing the silicon, how do you need to customize it in order to optimize for the software? And I know Microsoft is doing an incredible job along that stack. If you could take a minute, maybe, describe what it is that you’re doing and why.

Satya Nadella, Microsoft: Yeah. I would say I’d start from the very top, Sassine, just to kind of give you a flavor for even my own belief on why this is different. We are building these new, I call it, systems of intelligence. Right? Let’s just take something like GitHub Copilot.

Right? First, I remember, whatever, three and a half years ago is when I first saw code completions. Right? I mean, software engineers are also as skeptical as silicon engineers, and we said, will this AI thing amount to anything? Will it really work?

And it started working. So code completions were magical. Right? Because we were working on IntelliSense for decades, and finally, we had IntelliSense in code completion. Then we said, okay.

Can I actually ask AI questions? Right? Instead of going to Reddit and Stack Overflow and copying and pasting, can I actually ask? And so chat became the next thing. Then we said, okay.

Can we even do multi-file edits across the repo? Right? Now we have agents. And now, instead of just even thinking of pair programming, we have a peer programmer with these agents. So that’s a complete intelligence system, which is essentially what you’re gonna do for silicon design.

Right? So in some sense, those are the new applications. Because when I think about silicon design, as customers of yours inside of Microsoft, we have to be able to do tape-outs, A0s, every year with sort of absolute high fidelity. And that ain’t gonna happen if we don’t have breakthroughs in the tooling that our engineers use. That then is leading to the foundational rework of the data center, of all the components in the data center.

And that’s where, for example, my smart NIC, my DPU, my AI accelerator all have to be designed together to support the training and inference workloads going forward. And that’s, I think, the exciting part. There’s a system architecture that’s changing. The workload itself is changing. And the coupling between those two is what we’re all sort of grappling with, quite frankly.

And it’s great to see the innovation that you’re bringing to us and we are bringing to you. Both ways, I think we need each other.

Sassine Ghazi, Synopsys: Exactly. I mean, that’s why we are so excited about what we’re calling re-engineering engineering, because you have to think differently in order to design these complex interconnected systems. Now with Microsoft, we started the Copilot journey with great success. As you know, Synopsys has thousands of software developers, and they started seeing the amazing benefit of having an assistant. Now we’re moving into the more sophisticated LLMs with what we’re calling agent engineers.

And I know you’re very passionate about that. And, respecting your time, any thoughts as you’re thinking about the future of agents orchestrating multiple agents to solve these challenges?

Satya Nadella, Microsoft: I think that is the phase we are in. Right? If you sort of say that it started with more things like completions, we then went to chat, and now we are giving agents the task. So in some sense, in the first phase, we were asking questions and we were doing the execution. In this next phase, we are going to give instructions and AI will do the execution, if you will.

But we’ll still be in the loop, and that, I think, is what is important for us as engineers, whether it’s on the silicon side or on the software side. Because at the end of the day, the abstraction level goes up, but the understanding of the system, I think, is still going to be very, very important for us to be able to create great engineered outputs. That’s, I think, the exciting part. The other thing that I would say is this reasoning capability. The big change in the last year has been that it’s not just about having a very capable pre-trained model. In an interesting way, there are lots of pre-trained models that are fairly capable.

And it’s showing that if you have a sufficiently large pre-trained model, the trick is really about how you teach it reasoning for a given task. Right? So in your case, what does it mean to teach it to reason over silicon design? Something like, you know, what you and I talked about the last time: the type of optimization you do between power and performance and area. That’s a reasoning task that we have previously had algorithms for. So the question is, can you teach a core model that thing, using RL and other mechanisms?

And so that to me is the place where I think a lot of interesting product capabilities and model capabilities are getting intertwined. And that’s, I think, the exciting phase we are in.
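The power-performance-area trade-off Nadella refers to can be sketched as a toy objective function over candidate design points. To be clear, this is a purely illustrative sketch: the cell names, metric values, and weights below are all hypothetical and do not reflect any real Synopsys or Microsoft flow, which would optimize over far richer state with far more sophisticated methods.

```python
# Toy PPA (power, performance, area) trade-off: score a few hypothetical
# design points with a weighted cost and pick the cheapest.
# All names, numbers, and weights are illustrative only.

candidates = [
    # (name, power_mW, delay_ns, area_mm2)
    ("high_vt_cells", 120.0, 2.4, 1.10),
    ("low_vt_cells",  180.0, 1.6, 1.15),
    ("mixed_vt",      145.0, 1.9, 1.12),
]

# Relative importance of each metric (hypothetical weights).
W_POWER, W_DELAY, W_AREA = 1.0, 50.0, 80.0

def ppa_cost(power_mw, delay_ns, area_mm2):
    """Lower is better: a simple weighted sum of the three metrics."""
    return W_POWER * power_mw + W_DELAY * delay_ns + W_AREA * area_mm2

# Exhaustive search is fine for three points; real flows explore huge spaces.
best = min(candidates, key=lambda c: ppa_cost(*c[1:]))
print(best[0])
```

The interesting point in the conversation is that this kind of hand-crafted objective is exactly what an RL-trained model would learn to reason about instead of being given explicit weights.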

Sassine Ghazi, Synopsys: Exactly. Now, just closing remarks from your side. You mentioned that software engineers can be as skeptical as hardware engineers, and I wanna talk about that later. But any advice you have, given the pace at which innovation and technology is moving?

Satya Nadella, Microsoft: It’s a great question. So I think what’s happening for us, Sassine, is, even when I look at the core workflow inside of Microsoft, in spite of massive technical changes or platform shifts, right, we went from client-only to client-server to the web to cloud and mobile, the core workflow has remained stable, quite frankly. You know, we changed a little here and there. You know, we kind of have fancy things like DevOps today and blah blah blah, but nothing really at the core changed. But this is the first time I feel the core workflow itself may change, in the sense that, if you think about it, right, at LinkedIn, just to give you even a feel for what structurally we’re doing, we now have a new role called a full-stack builder.

Because if you think about it, we have now put out these powerful tools where a designer, a product manager, and a front-end engineer can all come at it as full-stack product builders. So why not increase the scope even for these roles? So I think that’s one of the interesting things for us. And the same thing is happening elsewhere. Like, one of the things OpenAI quite frankly taught us was that there is no distance anymore between what we would consider AI science and a workload or an application. That was the magic. Right?

And so to me, even thinking of what is science to product to engineering, that is the place where, whether it’s in your company, our company, or anyone who is in the audience, I think we’ll have to fundamentally get down to what is the outcome, and how do we really achieve that outcome by streamlining our work, work artifacts, and workflows to drive that outcome faster and deliver more value to our customers versus the status quo.

Sassine Ghazi, Synopsys: You made my job easier for the rest of the keynote, Satya. Thank you so much for the partnership, and thank you for joining us this morning. Thank you.

Satya Nadella, Microsoft: Thank you so much. It’s been my pleasure.

Sassine Ghazi, Synopsys: Thank you. So as you’ve heard, the complexity of bringing together these multiple disciplines of optimization to achieve the schedule and, more importantly, the differentiation is accelerating at a speed that I have not seen in my last twenty-seven, twenty-eight years in the industry. So, as the old saying goes, necessity is the mother of all invention. Today, with the need to deliver on this pervasive intelligence with that increased complexity and pace, and you’re gonna hear me talk about complexity and pace throughout the discussion, we need to rethink how we design intelligent systems. So for the remainder of my presentation, I have really three sections.

What is an intelligent system, and how do we rethink how we design these intelligent systems? Then we go into silicon and the key technologies in silicon to support these intelligent systems. And lastly, our vision and roadmap for AI to change the workflow and how things are being done. So what we saw earlier with SpaceX and with the robot, and when you think of an autonomous car or drones, etcetera, these are intelligent systems where there’s a massive amount of software, with the AI workload to drive the application. And those are very specific applications where you need silicon that is customized in order to drive that efficiency across the stack.

If you take a closer look at a drone, and here you’ll see the complexity of what we’re talking about, you start with the workload, which is software and AI that is expected to be autonomous. It must understand and avoid objects, both static and flying. It must communicate with the operator. The entire system must be built to support this workload. Now, that’s a lot of complex software and AI models, of course. At the same time, the software must control the mechanical aspects of the drone, the actual motors, the battery, and that’s electrical.

So you’re going from the electronics, or the software, to the actual physical drone, and the silicon that is optimized to make sure it’s efficient in terms of latency, power efficiency, etcetera. So that’s the electronic system, which is connected to an electrical system, and then you start thinking about the physics, the aerodynamics, the type of material you need to use in order to deal with the stress as well as the reliability of that drone as it’s operating. Now if you’re a system engineer thinking about designing this complex drone, you have to look not only at the individual engineering domains, but you need to have an understanding of the cross-domain interactions. And in order to do that, you have to start thinking about how to virtualize with a high level of fidelity to design that system. Now the other thing that is important is that these systems are not operating in isolation.

They’re often interconnected, meaning one system or one drone is operating and interacting with another drone. Now I wanna show you a really cool display where about 10,000 drones were operating to do a show, and they were all controlled from a single laptop. And there’s even more complexity: these drones were not operating in a lab. They were operating in a real-world environment, and that brings immense complexity.

It’s the same when we talk about an autonomous car: how will it operate on the road in a real-world environment? With that multidisciplinary, interconnected system engineering, you have to get it done right the first time. Otherwise, the cost of developing the system makes it very challenging for a company to survive if you don’t have the right methodology and workflow. Another example of an intelligent system is actually a data center, where you have very specific workloads that need different optimization from the silicon all the way up to the system in order to drive efficiency. I mentioned earlier the Azure example, where we are optimizing some of our technology that needs a massive amount of compute to run, and where we’re seeing significant improvement in time and cost due to that optimization we’re providing at the compute infrastructure level.

So just like these examples, these domains are, again, multidisciplinary and bring together different engineering that we need to take into account. This is where the digital twin comes in. And I know that as an industry we’ve been talking about digital twins for a while, but it’s essential, given the complexity, to simulate in real time and analyze and optimize at that system level. Now to build an efficient data center, you need to model the workload on silicon devices that don’t yet exist. Today, we actually have customers running their LLMs on our accelerated prototyping platforms.

They are co-developing their LLM and the silicon for the target workload that they are designing. Power is a critical component. So how do you optimize power for that specific application, in this case a data center? Synopsys actually is the leader in the electronics digital twin. We’ve been talking about EDT, or the electronics digital twin, for at least three to four years, as we started engaging deeper with the complexity of automotive and autonomous driving.

The digital twin itself needs to model both the electronics and the surrounding environment. Now in the case of automotive, we have to partner with the ecosystem, which has other parts of the modeling that need to come in alongside the chip virtualization and the electronic system. And think about it: what other way is there to validate an autonomous car without this digital twin capability? Actually, if you take a look at the digital twin in action here, this is where Synopsys was able to virtualize and model the control system and the zonal and compute ECUs communicating with each other. And that model is executed with our technologies called Silver and Virtualizer. In this particular case, the example you’re seeing was a partnership with IPG CarMaker in order to bring in the vehicle dynamics and the surrounding physical world.

So what we provided was the electronics virtualization, and IPG CarMaker brought the surrounding physical world into this example. Now during the execution, the software development and testing teams can observe the behavior of that silicon in the environment for the specific workload they are building. Now that does not only apply to cars. Again, back to the intelligent systems: drones, data centers, etcetera, they all benefit from that virtualization that I’m talking about.

Now if we bring it closer to silicon, a 3DIC, or advanced package, is a sophisticated, complex system where you need to take into account not only the electronic design. You can argue the electronic design in this case is understood. But the moment you start stacking these chiplets into this advanced package, you’re dealing with a whole other slew of challenges, be it thermal, mechanical, fluid, structural. How do you think of that system when you’re designing it, and not solve the problems when it’s too late? About eight years ago, when we decided to collaborate with Ansys, we could envision the need: given where Moore’s Law is and the limits of going beyond the reticle size, you’re gonna be constrained by that physical effect.

The stacking of chiplets and bringing multi-die into an advanced package became essential as part of the solution we needed to provide to customers. And today, we’re proud that we are able to enable our mutual customers to deliver these complex advanced packaging and multi-die systems. Now I wanna double-click into the silicon side of the key factors needed to continue the momentum of innovation, but it goes back to the same thing. What our silicon folks are dealing with is complexity and the pace at which they need to design these hundreds of billions of devices. And actually, customers are already talking about trillions of transistors brought together in one package.

Meanwhile, on schedule, there’s a race to go from an eighteen-month tape-out to sixteen or twelve months or below to deliver this customized silicon for these intelligent systems. Now how do you deal with that? Add the complexity of the technology on a single die, where we’re talking about GAA, we’re talking about angstrom-scale nodes in order to design that silicon, and then you bring it together into an advanced package. I wanna walk through six key technology factors that I want us to think through in order to deliver this advanced silicon. First, I’ll start with the advanced packaging.

3DIC is the only way you can scale to the hundreds of billions and to the trillions, because there’s no way you can put these things together in a monolithic fashion. Now, the moment you start scaling to that level of complexity, you can only achieve the performance or power by being efficient at the interconnect level and in how you architect that multi-die system. And in most cases, dies may be coming from different process technologies and different foundries. So how do you verify, validate, and architect in order to deliver this advanced package? Now interfaces become essential, interface IP, where the only other alternative is a monolithic die sitting on a PCB, where you still need IP to connect those multiple monolithic chips together. Which leads me to the second opportunity: the advancement in IP.

So the first factor is the 3DIC, to architect that system and bring the right choices and optionality you need. Then the next challenge is the IP to interconnect this advanced system. We are fortunate that we are in a leadership position in IP, meaning we work with every customer that is thinking about either a monolithic die or multiple dies together. One of the things that we’ve observed over the last three to five years is the pace at which those standards are evolving. Where we used to design a standard and it used to be valid and viable for our customers for four or five years, now that time has shrunk significantly.

Actually, one of the best examples: if you think about 2018, we were talking about two gigabits per second of interconnect. And that has been doubling at about eighteen months, so that in 2024 we reached 32 gigabits per second, with an expectation to be at 64 this year. So the pace at which the complexity of these interfaces is growing is truly exponential. The second layer with IP, when you think of the advanced package, is HBM. HBM is another key driver in bringing together these advanced systems.
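The doubling cadence cited here checks out arithmetically: 2 Gb/s in 2018, doubling every eighteen months, is four doublings over six years, which lands exactly on 32 Gb/s in 2024. A quick sketch of that arithmetic (the function and its defaults are just an illustration of the cadence as stated in the talk, not a standards roadmap):

```python
# Interconnect bandwidth under an 18-month doubling cadence,
# starting from the 2 Gb/s figure cited for 2018.

def bandwidth_gbps(year, base_year=2018, base_gbps=2.0, doubling_years=1.5):
    """Bandwidth after (year - base_year) years of 18-month doublings."""
    doublings = (year - base_year) / doubling_years
    return base_gbps * 2 ** doublings

print(bandwidth_gbps(2024))    # four doublings -> 32.0
print(bandwidth_gbps(2025.5))  # one more doubling -> 64.0
```

On this cadence the 64 Gb/s point falls around mid-2025, consistent with the "64 this year" expectation in the keynote.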

As you look at HBM, DDR, PCI Express, Ethernet, the evolution of all these interfaces has never been as fast as what we’re seeing right now. And it’s not by accident that you’re seeing these hockey sticks going exponential. AI is the driver. There’s an application that is driving these interfaces at that pace and acceleration, as well as the complexity. Now, the next layer of complexity as we look at advanced systems is the actual advanced nodes.

And I know people make comments that Moore’s Law is dead, but then you see customers designing the most advanced AI silicon still pushing the foundries not only from a capacity point of view, but from a technology point of view, in order to keep up with the angstrom march. And the reason is that it does deliver the performance and power efficiency that is needed. It wasn’t too long ago that we talked about seven nanometers as an advanced technology. Now most of these advanced AI chips are below seven nanometers, on what we call the march to angstrom.

Synopsys is very fortunate, actually, that we work at the very early steps of process technology development, with technologies we have like TCAD and OPC, where we are with the foundry at the very early R&D stage, at the device and process modeling and simulation level. The complexity and the art of what we’re doing with the laws of physics is truly an engineering feat. It’s unbelievable what’s happening, but that march is continuing. Now, for the leading fabs, to develop and productize their next node is not only an EDA investment. It’s an EDA and IP investment, where you have to make sure that the IP is designed not only for one foundry but for multiple foundries, to give our customers that optionality when they’re thinking about multiple dies sitting in an advanced package.

Those dies may come from different foundries, and this is where our investment in IP and the roadmap matters: keeping up not only with the interface acceleration but also with designing that IP across multiple foundries. We talked about 3DIC, IP, advanced nodes, massive complexity; obviously the next step is how you verify that complex system, when you have quadrillions of cycles that you need to validate. And here in our world, we cannot say "it’s good enough, let’s tape out," because the time and the cost are so significant; verification needs to evolve. Actually, Synopsys has been talking for at least ten years about the verification continuum, where you start with continued acceleration with VCS and then evolve to different levels of abstraction, speed, and capacity, from VCS all the way up to HAPS for prototyping, and further up to Virtualizer for virtual prototyping. That continuum is essential in order to drive that innovation.

Actually, a couple of weeks ago we announced our HAPS-200 and ZeBu-200 platforms; many of you were at the launch, where both Arm and NVIDIA talked about how the platforms are helping them improve the verification efficiency and cycle of these complex systems they’re building. Now, of course, AI has opened the door to a different way of dealing with that verification complexity. In the car example I showed earlier, we had virtualization of many parts of the silicon before the RTL had even been written; then, as the RTL matures, you go to VCS, to ZeBu, to HAPS, bringing that continuum together to validate this complex SoC. With advanced verification and IP, we’re able to shift left the design cycle, which is of course essential to deal with that shrinking schedule. Now we need to stretch the verification cycle beyond pre-silicon to post-silicon.

How do we bring it all the way to the field, ensuring that when the end product is sitting in a car or a drone, operating in real life with real workloads, it is reliable, and if there’s a failure, we know what’s causing it as it operates in the system? We call this SLM, or Silicon Lifecycle Management. With SLM, the initial thinking was in-field health monitoring: you insert monitor sensors in the chip, then watch the health of the end SoC as it sits in the field with workloads running on it. With 3DIC, this brought a whole new opportunity and element.

In talking to some of the leading packaging manufacturing companies out there, one of their big fears about going broad with 3DIC is this: you’re putting multiple dies and chiplets into an advanced package, and that package is running in a car or in a data center. Let’s assume one of those chiplets is overheating under a specific workload, causing a failure, or the warpage or cracking of the die sitting above it. How do you monitor these things while the workload is running without that capability? So SLM is not only about taking these monitor sensors and watching how the SoC runs in the field from a monolithic standpoint; with 3DIC, it will become essential to have that capability early in the process. Now, the last of the six factors I talked about is EDA: how do we bring advanced EDA to a convergent flow, able to deal with that angstrom march as well as the rest of the elements I just described, from systems architecture, digital and analog design flows, to signoff, test, and manufacturing?
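The in-field monitoring idea above can be sketched as a toy telemetry check over a chiplet stack (all names, sensor fields, and thresholds here are hypothetical illustrations, not an actual SLM interface; real flows correlate many sensors over time rather than applying single-sample limits):

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    chiplet: str       # which die in the 3DIC stack
    temp_c: float      # on-die thermal sensor value
    voltage_mv: float  # supply monitor value

def check_stack_health(readings, temp_limit_c=105.0, droop_limit_mv=680.0):
    """Flag chiplets whose in-field telemetry crosses a guard-band
    while the real workload is running (toy SLM-style check)."""
    alerts = []
    for r in readings:
        if r.temp_c > temp_limit_c:
            alerts.append((r.chiplet, "overtemperature"))
        if r.voltage_mv < droop_limit_mv:
            alerts.append((r.chiplet, "voltage droop"))
    return alerts

readings = [SensorReading("compute_die_0", 98.5, 705.0),
            SensorReading("compute_die_1", 112.3, 702.0),  # hot under this workload
            SensorReading("sram_die", 87.1, 690.0)]
print(check_stack_health(readings))  # [('compute_die_1', 'overtemperature')]
```

The point of the sketch is the scenario from the talk: a single chiplet overheating under one specific workload is visible only if the monitors are designed in early and read out in the field.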

How do you enable all these tools to come together in a hyperconvergent way, so you have a predictable, convergent flow and outcome, reducing the number of iterations and the issues discovered late in the flow? And of course we build in AI everywhere, at every opportunity we have, to accelerate the task. We were the pioneer in bringing reinforcement learning, starting in 2017, to tame that complexity in every part of the flow where it is needed. So those are the six technology areas needed to deliver state-of-the-art silicon. And as I said, we cannot do it alone.
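The learning-driven tuning idea mentioned above can be illustrated with a toy sketch. This uses simple random-search hill climbing as a stand-in for the reinforcement-learning approach the talk refers to; the objective function and parameters are entirely hypothetical, and this is in no way the actual DSO.ai algorithm:

```python
import random

def ppa_score(params):
    """Toy proxy for a PPA objective (a real flow would run synthesis and
    place-and-route, then measure power, performance, and area)."""
    effort, density = params
    return -(effort - 0.7) ** 2 - (density - 0.6) ** 2  # peak at (0.7, 0.6)

def tune_parameters(steps=2000, sigma=0.05, seed=0):
    """Propose a small perturbation of the tool settings; keep it if the
    objective improves. Illustrative only."""
    rng = random.Random(seed)
    params = [rng.random(), rng.random()]
    best = ppa_score(params)
    for _ in range(steps):
        candidate = [min(1.0, max(0.0, p + rng.gauss(0, sigma))) for p in params]
        score = ppa_score(candidate)
        if score > best:
            params, best = candidate, score
    return params, best

params, best = tune_parameters()
print(params, best)  # converges near the optimum (0.7, 0.6)
```

The appeal of learning-based search in this setting is the size of the space: with dozens of coupled tool knobs, exhaustive sweeps are infeasible, while a learner can home in on good regions in far fewer trials.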

We have many deep collaborations with foundries, OSATs, and IP partners on this journey. And actually, I want to start with Satya’s point that sometimes engineers can be skeptical. I urge you to put that skepticism aside, because you’re not doing yourself, and definitely not your company or your team, a favor if you’re not rapidly adopting the technology needed to change the workflow, given the complexity we’re talking about. In many discussions with customers about what we’ve delivered so far, and I’ll walk through what we have today that customers are using with Synopsys.ai, they see tremendous value; but what they say at the same time is that it has not changed their workflow. It helped them deliver on the complexity; we call it taming that complexity.

We appreciate that. But given the pressure to do something different to deal with that exponential complexity, we have to think differently, and this is where we believe AI is going to change the workflow. Let me walk you through the journey of where we are and where we see the world going with AI. First, Synopsys.ai: this is where, as I said, we pioneered reinforcement learning in 2017, bringing it into the physical implementation space with DSO.ai working collaboratively with Fusion Compiler to optimize across many inputs and a large optimization space, delivering the best PPA in the shortest time possible. Then we started talking about our data continuum, with data analytics across Design.da, Fab.da, and Silicon.da.

How do you stitch together insights about what happens at the next step of the flow? The results are amazing. And I remember, around the 2018 timeframe, the R&D team came in very excited with the prototype they had, running a number of customer designs with fantastic results. Going to customers to convince them to use the technology, partly there was skepticism, but partly there was confusion: how do I use this technology in my workflow?

My engineers are structured a certain way that they’ve optimized over two or three decades; how do I evolve? Now, I hope none of you doubt that you need to use the Synopsys.ai technology to boost the outcomes and productivity you need to deal with that complexity. Next, I want to move to generative AI. In generative AI, think about the way we’re describing it today.

You have a copilot, and you have assistive and creative. Assistive is what we talked about in terms of the Copilot technology we started with Microsoft, where you have a workflow assistant, a knowledge assistant, a debug assistant, so you can ramp up a junior engineer much faster, and an expert engineer can interface with our products in a more modern, effective, efficient way. Then you have the creative element, with a number of examples here where we have early customer engagement, from RTL generation to test bench generation to test assertions: a copilot that helps you create part of your RTL, test bench, documentation, or test assertions. I’ll show in a moment the journey of maturity, where we feel the technology is going to be six months, nine months, two years from now, and how it will evolve.

Same thing: the results our customers are seeing from both the assistive and the creative are actually fairly impressive. And that’s not surprising, because when you modernize the way you do the work, you get to some truly impressive results compared to a human engineer working with the same approach or method as before. In the creative solutions, the productivity boost can be fairly significant: you can go from days to minutes. I want to remind us that, in our case, we cannot have models that hallucinate.

You cannot have models that say, "That was good enough, let’s go." We are very deliberate about when and how we engage with our customers, to make sure the maturity of what we are offering is actually acceptable without putting any part of your workflow at risk. It’s an important point to make. Now, as AI continues to evolve, so will the workflow.

I often get asked, primarily by our investor stakeholders, when we will see a change in EDA as a market by leveraging AI. I don’t believe that will happen unless your workflow changes, meaning you can do certain things very differently so that you, the customer, can deliver on your product roadmap in a faster, more effective, more efficient way. Now, with the agentic AI era, I would like to introduce the concept of agent engineers: agent engineers will collaborate with the human engineer to tame that complexity and change that workflow. This is where we have a deep collaboration with Microsoft, NVIDIA, and others on how to build these agents specifically for the semiconductor market, and, within the semiconductor market, how to have specialized agents for parts of your workflow.

As you look at this chart, think of it as our roadmap and vision of how we go from Synopsys.ai and the .da data analytics to agent engineers and the agents of the future. What you see on the x-axis is the evolution from copilot to autopilot. On the y-axis, from the bottom up, is how you build that capability, and it’s cumulative: you have to build and layer capabilities on top of each other, from generative to agentic. First, you start with assisting, and this is where we’ve put big energy and effort over the last couple of years, bringing copilot capability into each one of our products.

These are trained, specialized LLMs for each one of our products. So the first step is assisting. Next, you go into acting, where you have agents specialized for a specific part of the flow: as I mentioned, an RTL generation agent, a test bench generation agent, test assertions, etcetera. These action agents will of course improve over time, because they will learn from the design and the environment you’re running, which will be different for each customer.

The next level is how you bring multiple agents together and orchestrate these tasks. Then you go into dynamic adaptive learning, where you optimize based on your own workflow. So the first few steps operate within an existing workflow: you’re building agents to work inside it. Then the workflow itself will start changing as you move into the orchestrating and planning steps of the flow, with the ambition that the model or agent framework will be able to act and make decisions autonomously on part of a chip, or the entire chip, as the technology matures and evolves. As you look at this, I would like to draw a parallel to autonomous driving.

I’m sure all of you are familiar with the L1 through L5 levels for autonomous cars, where from L1 through L2-ish into L3 you move from a human monitoring the road to the system monitoring the road. Let me walk you through the similar levels, what is available today, and how we envision L1 through L5 for agent engineers. Think of L1 as the copilot of today, where we give it the ability to assist engineers and create files using LLMs. The moment you move to L2, agents start acting on specific areas of the workflow: for example, you can ask the agent to fix a lint error or a DRC violation.

They are empowered to act on a very specific part of the workflow, with a human engineer collaborating with these agent engineers. As you move into L3, multi-agent orchestration becomes very important: how do you orchestrate different agent types so you can start solving problems across domains? For example, to fix signal-integrity violations or to close timing, you need multi-agent orchestration; it takes different agents working together to achieve that ask.
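The L3-style orchestration described above might be sketched, very loosely, as a coordinator dispatching specialized agents until the design goals close. Every name, the dict-based "design state," and the routing logic here are hypothetical illustrations, not an actual Synopsys API; real agents would wrap EDA tools:

```python
from typing import Callable

# Hypothetical specialist "agents": each takes a design state and returns
# (new_state, report).
def lint_agent(state):
    return dict(state, lint_errors=0), "lint: cleaned all reported errors"

def si_agent(state):
    return dict(state, si_violations=0), "si: rerouted aggressor nets"

def timing_agent(state):
    return dict(state, wns_ps=max(0, state["wns_ps"] - 120)), \
        "timing: recovered 120 ps of slack"

class Orchestrator:
    """Toy multi-agent coordinator: inspects the design state and keeps
    dispatching whichever specialist is needed until all goals are met."""
    def __init__(self):
        self.agents: list[tuple[Callable, Callable]] = [
            (lambda s: s["lint_errors"] > 0, lint_agent),
            (lambda s: s["si_violations"] > 0, si_agent),
            (lambda s: s["wns_ps"] > 0, timing_agent),
        ]

    def run(self, state):
        log = []
        for needed, agent in self.agents:
            while needed(state):
                state, report = agent(state)
                log.append(report)
        return state, log

state = {"lint_errors": 3, "si_violations": 2, "wns_ps": 200}
final, log = Orchestrator().run(state)
print(final)  # all three goals closed; timing needed two agent passes
```

The cross-domain point from the talk shows up even in this toy: no single agent can close the design, so the coordinator's job is deciding which specialist to invoke, and when.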

At L4, you start doing planning and adaptive learning, which allows the agentic solution to assess the quality of results and refine the flow; this is where the workflow starts adjusting and improving itself, no longer the same workflow we started with at L1 or L2. At L5, this is where we feel the term autopilot is appropriate, adding a high level of decision-making capability: the entire multi-agent system can fully and autonomously reason, plan, and take actions to achieve that higher-level outcome. Now, you may be wondering, and I know we’ll have a number of sessions today and tomorrow: where are we?

At the L1 and L2 levels, we have a number of engagements with a number of customers, and that technology will of course continue maturing and building. Back to the ADAS example: it’s not as if, when you reach L2 or are designing at L3, you stop touching L2. Those levels keep evolving as you get to the next phase and next level of maturity. Now, while there are skeptics, I think we have at least one cheerleader, in that Jensen would like to rent millions of Synopsys agents to tame the complexity of chip design of the future. And as Satya mentioned, not only for chip design but for every industry, the workflow will change.

There will be collaboration between human engineers and agent engineers across industries and product applications. We’re of course very excited about this opportunity, and we look forward to engaging with you as we evolve and build the roadmap from copilot to autopilot. Now, as I wrap up: beyond the optimizations I described at the system level, silicon level, and AI level, there are other opportunities we need to consider. Across those technology horizons there is the workflow, the top layer we just talked about; the engines, or solvers; and then the compute. At the workflow layer, that’s what we just described: L1 through L5 with agent engineers.

Then, within agent engineers, you continue evolving from a sub-block to a bigger part of the SoC to the entire chip. Below the workflow sit the actual engines. Imagine you have a timing agent: it will need to work not only at the PrimeTime shell level. Is there an opportunity to go and optimize at the engine and solver level?

And the answer is yes. So how do we continue evolving the engines and the solvers, not only at the electronics level but, as we expand the portfolio, across multiple levels, from electronics to electrical to mechanical, etcetera? This is where the digital twin becomes even more practical, scalable, and accurate to use. Oops. So at the compute level, we’ve done, I believe, a very good job as an industry with every opportunity.

I moved too fast; can you go back one slide, please? We did a very good job as an industry optimizing from CPU to GPU, across multiple flavors of CPU and multiple flavors of GPU. And it’s not a simple port from this CPU to that CPU; it’s an optimization opportunity where we see 20, 30, 40 percent improvement in compute from one CPU to the next.

As we optimize on a GPU, you’ve most likely seen, in the last few days and definitely in the last few years, talk of 10x, 15x, 20x improvement. And as you look ahead, possibly with the qubit, with the QPU, what are the opportunities to continue optimizing at that bottom layer, the compute, then the engine, then the workflow? Now, as we’re wrapping up, I talked about a couple of concepts: the need to re-engineer engineering, and agent engineers collaborating with human engineers to change the workflow, in order to deal with the complexity and the pace of what we’re building. I’m very thankful for, passionate about, and appreciative of the Synopsys team, so committed to our customers and to sustained innovation that they come in every morning with the enthusiasm to drive it. But what makes us even more excited and happy is seeing our customers use our innovation and technology to deliver your products, truly changing the way humankind will live.

We’re at the center of it. Our mission is empowering innovators to drive human advancement. With that, a big, big thank you; enjoy the show, and I look forward to interacting with you. Thank you.

This article was generated with the support of AI and reviewed by an editor.
