Investing.com -- Intel (NASDAQ:INTC) Labs and the Weizmann Institute of Science have developed a new method that makes large language models (LLMs) run up to 2.8 times faster without sacrificing output quality, the company announced.
The breakthrough in "speculative decoding" was presented at the International Conference on Machine Learning in Vancouver, Canada. This technique allows any small "draft" model to accelerate any large language model, even when they use different vocabularies.
"We have solved a core inefficiency in generative AI. Our research shows how to turn speculative acceleration into a universal tool. This isn’t just a theoretical improvement; these are practical tools that are already helping developers build faster and smarter applications today," said Oren Pereg, senior researcher at Intel Labs’ Natural Language Processing Group.
Speculative decoding works by pairing a small, fast model with a larger, more accurate one. Given a prompt like "What is the capital of France?", a traditional LLM generates each word one at a time, running the full model and consuming significant resources at every step. With speculative decoding, the small assistant model quickly drafts a full phrase such as "Paris, a famous city," which the large model then verifies, reducing compute cycles.
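To make the draft-and-verify loop concrete, here is a minimal, illustrative Python sketch. The functions draft_model, target_model and speculative_decode, the toy token tables, and the parameters k and max_tokens are simplified stand-ins invented for this example, not Intel's or the Weizmann Institute's actual implementation; the sketch only shows how a cheap drafter proposes several tokens and the large model keeps the longest prefix it agrees with.

```python
# Illustrative sketch of greedy speculative decoding with toy "models".
# Both models are stand-in functions mapping a context (list of tokens)
# to their preferred next token; real systems use neural LLMs instead.

def draft_model(context):
    # Cheap, fast drafter: often agrees with the target, but not always
    # (hypothetical lookup table for illustration only).
    guesses = {"What is the capital of France?": "Paris",
               "Paris": ",", ",": "a", "a": "famous", "famous": "town"}
    return guesses.get(context[-1], "<eos>")

def target_model(context):
    # Large, accurate model: the answer we actually want, token by token.
    answers = {"What is the capital of France?": "Paris",
               "Paris": ",", ",": "a", "a": "famous", "famous": "city"}
    return answers.get(context[-1], "<eos>")

def speculative_decode(prompt, k=4, max_tokens=8):
    tokens = [prompt]
    while len(tokens) < max_tokens and tokens[-1] != "<eos>":
        # 1) Draft: the small model proposes up to k tokens autoregressively.
        draft, ctx = [], tokens[:]
        for _ in range(k):
            nxt = draft_model(ctx)
            draft.append(nxt)
            ctx.append(nxt)
            if nxt == "<eos>":
                break

        # 2) Verify: the large model checks the draft; in a real system this
        #    is a single parallel forward pass over all drafted positions.
        accepted, ctx = [], tokens[:]
        for proposed in draft:
            expected = target_model(ctx)
            if proposed == expected:
                accepted.append(proposed)   # drafted token accepted "for free"
                ctx.append(proposed)
            else:
                accepted.append(expected)   # first mismatch: keep the target's token
                break
        tokens.extend(accepted)
    return tokens

print(speculative_decode("What is the capital of France?"))
# ['What is the capital of France?', 'Paris', ',', 'a', 'famous', 'city', '<eos>']
```

When the draft agrees with the large model, several tokens are accepted per verification pass instead of one, which is where the speedup comes from; output quality is preserved because every kept token is one the large model would have produced anyway.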
The new method removes limitations that previously required shared vocabularies or co-trained model families, making it practical across different types of models. The technique is vendor-agnostic, working with models from different developers and ecosystems.
"This work removes a major technical barrier to making generative AI faster and cheaper," said Nadav Timor, Ph.D. student in the research group of Prof. David Harel at the Weizmann Institute. "Our algorithms unlock state-of-the-art speedups that were previously available only to organizations that train their own small draft models."
The research introduces three new algorithms that decouple speculative decoding from vocabulary alignment. These algorithms have already been integrated into the Hugging Face Transformers open source library, making advanced LLM acceleration available to millions of developers without requiring custom code.
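For developers, the usual entry point is assisted generation in Hugging Face Transformers: passing an assistant_model to generate() turns on speculative decoding, and recent releases also accept separate tokenizers for the main and draft models when their vocabularies differ. The snippet below is a hedged sketch, not an official example; the model checkpoints are placeholders and the exact library version supporting cross-vocabulary assistance is an assumption, so check the Transformers documentation for current details.

```python
# Sketch: assisted (speculative) generation with Hugging Face Transformers.
# Model names below are placeholder assumptions; any suitably sized
# causal LM pair can be substituted.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "meta-llama/Llama-3.1-8B-Instruct"   # large "target" model (assumed)
draft_name = "Qwen/Qwen2.5-0.5B-Instruct"          # small "draft" model (assumed)

target_tok = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(target_name, device_map="auto")

draft_tok = AutoTokenizer.from_pretrained(draft_name)
draft = AutoModelForCausalLM.from_pretrained(draft_name, device_map="auto")

inputs = target_tok("What is the capital of France?", return_tensors="pt").to(target.device)

# assistant_model enables speculative decoding; supplying both tokenizers
# lets the library bridge models with different vocabularies
# (universal assisted generation in recent Transformers releases).
output = target.generate(
    **inputs,
    assistant_model=draft,
    tokenizer=target_tok,
    assistant_tokenizer=draft_tok,
    max_new_tokens=64,
)
print(target_tok.decode(output[0], skip_special_tokens=True))
```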
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.