In 2023, artificial intelligence (AI) technology experienced an unprecedented surge in adoption as enthusiasm for its human-like generative capabilities reached new heights.
Naturally, amid this breakneck growth, practitioners began running into problems with AI models, most notably “hallucination,” a phenomenon in which models generate inaccurate or fabricated responses, posing a barrier to widespread adoption.
But the companies developing these models were quick to respond. One of the more effective methods they have found to fight hallucination is to furnish the models with “context,” usually derived from vector-embedded raw data and retrieved during the answer-generation phase (RAG), JPMorgan analysts explained in a new note to clients.
“Retrieval Augmented Generation (RAG) has emerged as an essential aspect of maximizing the potential of LLMs by supplying the required context for the model to utilize, effectively minimizing hallucination while maintaining responses relevant to the posed question, and drawing from current or proprietary data not incorporated during training,” analysts wrote.
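To make the pattern concrete, here is a minimal, self-contained sketch of the retrieve-then-generate flow the analysts describe. The `toy_embed` function is a stand-in for a real embedding model, and the assembled prompt would be sent to an LLM in practice; only the overall flow (retrieve relevant context, then generate from it) follows the description above.

```python
import math
from collections import Counter

def toy_embed(text: str, dim: int = 64) -> list[float]:
    """Stand-in for a real embedding model: hash words into a fixed-size unit vector."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Dot product of unit vectors equals cosine similarity."""
    return sum(x * y for x, y in zip(a, b))

# 1. Embed the current/proprietary documents once and keep them in an index.
documents = [
    "Q4 revenue rose 8% year over year, driven by services.",
    "The latest model card lists a 128k-token context window.",
    "Hallucination rates fell after retrieval context was added.",
]
index = [(doc, toy_embed(doc)) for doc in documents]

# 2. At question time, embed the query and retrieve the closest chunks.
question = "What happened to hallucination rates?"
q_vec = toy_embed(question)
top_docs = sorted(index, key=lambda item: cosine(q_vec, item[1]), reverse=True)[:2]

# 3. Supply the retrieved chunks as context so the model answers from them
#    instead of from stale or missing training data.
prompt = ("Answer using only this context:\n"
          + "\n".join(doc for doc, _ in top_docs)
          + f"\n\nQuestion: {question}")
print(prompt)  # in practice, this prompt is sent to the LLM
```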
Analysts also noted that OpenAI itself recommends smaller, faster models such as “text-embedding-3-small,” having slashed the price of its embedding models by 95%.
This approach encodes and condenses large text corpora into vectors that can be scanned and indexed in databases tailored to AI workloads such as Q&A, search, and data mining.
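As an illustration of that encode-and-index step, the sketch below embeds a small corpus and runs a brute-force nearest-neighbor scan. It assumes the v1.x `openai` Python SDK with an API key in the environment, the `text-embedding-3-small` model mentioned above, and numpy; a production system would typically hold the vectors in a dedicated vector database rather than an in-memory array.

```python
import numpy as np
from openai import OpenAI  # assumes the v1.x openai SDK and OPENAI_API_KEY in the environment

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Encode texts into dense vectors with the small, fast embedding model."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Encode and index the corpus once; a vector database would replace this array in practice.
corpus = [
    "How to reset a forgotten account password.",
    "Shipping times for international orders.",
    "Warranty coverage for refurbished devices.",
]
corpus_vecs = embed(corpus)
corpus_vecs /= np.linalg.norm(corpus_vecs, axis=1, keepdims=True)

# Scan the index: cosine similarity between the query and every stored vector.
query_vec = embed(["When will my overseas package arrive?"])[0]
query_vec /= np.linalg.norm(query_vec)
best_match = corpus[int(np.argmax(corpus_vecs @ query_vec))]
print(best_match)  # expected: the shipping-times document
```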
“For example, this Microsoft (NASDAQ:MSFT) project demonstrates the use of word embeddings from LLMs with Azure Cognitive Search to create search experiences that understand relationships between entities in a query and products in a catalog,” analysts noted.
Meanwhile, other tech giants have also been making significant strides with their AI projects.
Notably, Google (NASDAQ:GOOGL) rebranded Bard as Gemini, introducing the Ultra 1.0 model and offering it to multi-modal app users for $20/month.
Anthropic followed suit by releasing Claude 2.1, featuring improved accuracy, reduced hallucination, a longer context window, and better cost efficiency.
Moreover, Apple (NASDAQ:AAPL) joined the fray, unveiling 4M and Ferret, which outperform ChatGPT-Vision with in-context learning techniques.