Intel and Weizmann Institute breakthrough speeds up AI models

Published 16/07/2025, 16:42

Investing.com -- Intel (NASDAQ:INTC) Labs and the Weizmann Institute of Science have developed a new method that makes large language models (LLMs) run up to 2.8 times faster without sacrificing output quality, the company announced.

The breakthrough in "speculative decoding" was presented at the International Conference on Machine Learning in Vancouver, Canada. This technique allows any small "draft" model to accelerate any large language model, even when they use different vocabularies.

"We have solved a core inefficiency in generative AI. Our research shows how to turn speculative acceleration into a universal tool. This isn’t just a theoretical improvement; these are practical tools that are already helping developers build faster and smarter applications today," said Oren Pereg, senior researcher at Intel Labs’ Natural Language Processing Group.

Speculative decoding works by pairing a small, fast model with a larger, more accurate one. When given a prompt like "What is the capital of France?", a traditional LLM generates each word step by step, consuming significant compute at every one. With speculative decoding, the small assistant model quickly drafts a full phrase such as "Paris, a famous city," which the large model then verifies, reducing the compute cycles required.
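The draft-and-verify loop can be summarized in a few lines of code. The sketch below is illustrative only, not the researchers' exact algorithm: `draft_next` and `target_next` are hypothetical stand-ins for the small and large models, and real systems verify all drafted tokens in a single batched forward pass of the large model rather than one call per token.

```python
# Illustrative sketch of greedy speculative decoding (not the paper's exact
# algorithm). draft_next / target_next are hypothetical stand-ins that return
# the next token for a given token sequence.

def speculative_decode(prompt, draft_next, target_next, k=4, max_new=32):
    tokens = list(prompt)
    while len(tokens) < len(prompt) + max_new:
        # 1. The small model cheaply drafts k candidate tokens.
        draft = []
        for _ in range(k):
            draft.append(draft_next(tokens + draft))
        # 2. The large model verifies the draft; the matching prefix is kept.
        #    (Real systems score all k positions in one batched forward pass.)
        for i, tok in enumerate(draft):
            expected = target_next(tokens + draft[:i])
            tokens.append(expected)  # on a match this equals the drafted token
            if tok != expected:      # first mismatch: discard the rest
                break
    return tokens

# Toy demo: both "models" just spell out a fixed string, so every draft is
# accepted and each round advances k tokens instead of one.
answer = "Paris, a famous city."
oracle = lambda toks: answer[min(len(toks), len(answer) - 1)]
print("".join(speculative_decode([], oracle, oracle, k=3, max_new=21)))
```

The speedup comes from the acceptance rate: whenever the large model agrees with the draft, one expensive verification step yields several tokens instead of one.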

The new method removes limitations that previously required shared vocabularies or co-trained model families, making it practical across different types of models. The technique is vendor-agnostic, working with models from different developers and ecosystems.

"This work removes a major technical barrier to making generative AI faster and cheaper," said Nadav Timor, Ph.D. student in the research group of Prof. David Harel at the Weizmann Institute. "Our algorithms unlock state-of-the-art speedups that were previously available only to organizations that train their own small draft models."

The research introduces three new algorithms that decouple speculative decoding from vocabulary alignment. These algorithms have already been integrated into the Hugging Face Transformers open-source library, making advanced LLM acceleration available to millions of developers without requiring custom code.
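In recent versions of Transformers this ships as "universal assisted generation": cross-vocabulary acceleration reduces to passing an assistant model and both tokenizers to `generate()`. The sketch below assumes a recent `transformers` release with this feature, and the checkpoint names are illustrative placeholders, not the pairing used in the paper.

```python
# Sketch of universal assisted generation in Hugging Face Transformers.
# Checkpoint names are illustrative; the point is that the draft and target
# models may use entirely different vocabularies.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "meta-llama/Llama-3.1-8B-Instruct"  # large, accurate model
draft_name = "Qwen/Qwen2.5-0.5B-Instruct"         # small, fast draft model

tokenizer = AutoTokenizer.from_pretrained(target_name)
model = AutoModelForCausalLM.from_pretrained(target_name)
assistant_tokenizer = AutoTokenizer.from_pretrained(draft_name)
assistant_model = AutoModelForCausalLM.from_pretrained(draft_name)

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    assistant_model=assistant_model,
    tokenizer=tokenizer,                      # both tokenizers are required
    assistant_tokenizer=assistant_tokenizer,  # when the vocabularies differ
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

When the draft and target models happen to share a vocabulary, the two tokenizer arguments can be dropped and `generate()` falls back to standard assisted decoding.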
