Investing.com -- Alibaba (NYSE:BABA) has launched the Qwen3-Embedding and Qwen3-Reranker series, setting new benchmarks in multilingual text embedding and relevance ranking. The series, which includes models designed for text embedding, retrieval, and reranking tasks, supports 119 languages and is available in 0.6B, 4B, and 8B versions.
The Qwen3-Embedding and Qwen3-Reranker series are built on the Qwen3 foundation model, which boasts robust multilingual text understanding capabilities. These new models have achieved state-of-the-art performance across multiple benchmarks for text embedding and reranking tasks. They are open-sourced under the Apache 2.0 license on Hugging Face, GitHub, and ModelScope, and can be used via API on Alibaba Cloud.
The Qwen3-Embedding series offers a range of sizes for both embedding and reranking models, catering to use cases that prioritize either efficiency or effectiveness. The 8B embedding model ranks No. 1 on the MTEB multilingual leaderboard as of June 5, 2025, with a score of 70.58. The reranking models excel in text retrieval scenarios, significantly improving search relevance.
The Qwen3-Embedding series supports over 100 languages, including various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities. The models are designed using dual-encoder and cross-encoder architectures and aim to fully preserve and enhance the text understanding capabilities of the base model.
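As a rough illustration of the two architectures mentioned above: a dual-encoder embeds the query and each document independently and ranks by vector similarity, while a cross-encoder (the reranker) scores each query-document pair jointly. The sketch below uses made-up vectors and cosine similarity, not the actual Qwen3 models, to show the first-stage retrieval step.

```python
import numpy as np

# Toy sketch of dual-encoder retrieval (made-up vectors, not Qwen3 output):
# query and documents are embedded independently, then ranked by cosine similarity.
def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these are embedding vectors produced by an embedding model.
query_emb = np.array([0.9, 0.1, 0.3])
doc_embs = {
    "doc_a": np.array([0.8, 0.2, 0.4]),   # points in a similar direction to the query
    "doc_b": np.array([-0.5, 0.9, 0.1]),  # mostly unrelated direction
}

# First-stage retrieval: rank documents by embedding similarity.
ranked = sorted(doc_embs, key=lambda d: cosine_sim(query_emb, doc_embs[d]), reverse=True)
print(ranked)  # doc_a ranks above doc_b
```

In a full pipeline, a cross-encoder reranker would then rescore only the top candidates from this stage, since scoring every pair jointly is far more expensive than one dot product per document.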
The training framework for the Qwen3-Embedding series follows the multi-stage training paradigm established by the GTE-Qwen series: the embedding models go through a three-stage training structure, while the reranking models are trained directly on high-quality labeled data, improving training efficiency.
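A common objective in supervised embedding training of this kind is an InfoNCE-style contrastive loss, which pushes a query's embedding toward a labeled positive document and away from negatives. This is a generic sketch of that objective; the article does not specify the exact loss Qwen3-Embedding uses.

```python
import numpy as np

# InfoNCE-style contrastive loss (an assumption for illustration; not
# confirmed as Qwen3-Embedding's exact training objective).
def info_nce_loss(query, positive, negatives, temperature=0.05):
    # Score the positive (index 0) and negatives against the query.
    candidates = np.vstack([positive] + list(negatives))
    sims = candidates @ query / temperature
    # Softmax cross-entropy with the positive as the correct class.
    log_probs = sims - np.log(np.sum(np.exp(sims)))
    return float(-log_probs[0])

# Toy vectors: the positive is close to the query, the negatives are not.
q = np.array([1.0, 0.0])
pos = np.array([0.9, 0.1])
negs = [np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
loss = info_nce_loss(q, pos, negs)
# The loss is near zero when the positive scores far above the negatives.
```

Lowering the temperature sharpens the softmax, which is why contrastive embedding training typically uses small values such as 0.05.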
As part of future work, Alibaba plans to optimize the Qwen foundation model further to enhance the training efficiency of text embeddings and reranking models. This will improve deployment performance across various scenarios. Additionally, the company plans to expand its multimodal representation system to establish cross-modal semantic understanding capabilities.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.