ONNX Model Zoo

Qwen3-Embedding-0.6B

<p align="center"> <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/> </p>

Highlights

The Qwen3 Embedding model series is the newest proprietary addition to the Qwen family, designed specifically for text embedding and ranking tasks. Built on the dense foundation models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). The series inherits the excellent multilingual capability, long-text understanding, and reasoning skills of its foundation models, and achieves significant advances across a range of text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.

Exceptional versatility: the embedding models achieve state-of-the-art performance across a broad range of downstream application evaluations. The 8B embedding model ranks No. 1 on the MTEB multilingual leaderboard (as of June 5, 2025, with a score of 70.58), while the reranking models excel in various text retrieval scenarios.

Comprehensive flexibility: the Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to use cases that prioritize either efficiency or effectiveness, and developers can combine the two modules seamlessly. In addition, the embedding models allow flexible, user-defined output dimensions, and both the embedding and reranking models support user-defined instructions to boost performance for specific tasks, languages, or scenarios.

Multilingual capability: thanks to the multilingual strengths of the Qwen3 models, the Qwen3 Embedding series supports more than 100 languages, including a variety of programming languages, and delivers robust multilingual, cross-lingual, and code retrieval capabilities.

Model Overview

Qwen3-Embedding-0.6B has the following features:

  • Model type: text embedding
  • Supported languages: 100+ languages
  • Number of parameters: 0.6B
  • Context length: 32k
  • Embedding dimension: up to 1024, with user-defined output dimensions ranging from 32 to 1024
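
A user-defined output dimension is commonly obtained Matryoshka-style: keep the leading components of the full embedding and re-normalize to unit length. A minimal sketch of that recipe in plain Python, on a toy vector rather than actual model output:

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` components of an embedding and re-normalize
    to unit length (the usual recipe for MRL-style dimension reduction)."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Toy 8-dim "embedding"; a real Qwen3-Embedding-0.6B vector has up to 1024 dims.
full = [0.5, 0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0]
small = truncate_embedding(full, 4)
print(len(small))                  # 4
print(sum(x * x for x in small))   # 1.0 (unit length again)
```

In Sentence Transformers, the same effect is typically exposed through the `truncate_dim` argument of `SentenceTransformer` (available in recent versions).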

For more details, including benchmark evaluations, hardware requirements, and inference performance, please refer to our blog and GitHub repository.

Qwen3 Embedding Series Model List

| Model Type | Model | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|---|---|---|---|---|---|---|---|
| Text Embedding | Qwen3-Embedding-0.6B | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | Qwen3-Embedding-4B | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | Qwen3-Embedding-8B | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | Qwen3-Reranker-0.6B | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | Qwen3-Reranker-4B | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | Qwen3-Reranker-8B | 8B | 36 | 32K | - | - | Yes |

Note

  • MRL Support indicates whether the embedding model supports customizing the dimension of the final embedding.
  • Instruction Aware indicates whether the embedding or reranking model supports customizing the input instruction for different tasks.
  • Our evaluation shows that, for most downstream tasks, using instructions typically improves performance by 1% to 5% compared with not using them. We therefore recommend that developers write custom instructions tailored to their tasks and scenarios. In multilingual settings, we also recommend writing instructions in English, since most of the instructions used during model training were originally written in English.
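
For instruction-aware use, queries are wrapped in a plain-text template (the same helper appears in the usage examples below), while documents are embedded without any instruction:

```python
def get_detailed_instruct(task_description: str, query: str) -> str:
    # Template from the usage examples in this card: the task description
    # and the query are joined as plain text; no chat formatting is needed.
    return f'Instruct: {task_description}\nQuery:{query}'

task = 'Given a web search query, retrieve relevant passages that answer the query'
print(get_detailed_instruct(task, 'What is the capital of China?'))
# Instruct: Given a web search query, retrieve relevant passages that answer the query
# Query:What is the capital of China?
```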

Usage

With versions of Transformers earlier than 4.51.0, you may encounter the following error:

```
KeyError: 'qwen3'
```
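
To fail fast with a clearer message, you can compare the installed `transformers.__version__` against the requirement before loading the model. A minimal sketch (the version parsing here is deliberately simplified and ignores pre-release suffixes such as `4.51.0.dev0`):

```python
def version_tuple(v: str):
    """Parse a 'major.minor.patch' version string into a comparable tuple.
    Simplified: drops any non-numeric pre-release/dev components."""
    return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())

required = "4.51.0"
installed = "4.49.2"  # in practice: transformers.__version__

if version_tuple(installed) < version_tuple(required):
    print(f"transformers>={required} is required, found {installed}")
```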

Sentence Transformers Usage

```python
# Requires transformers>=4.51.0
# Requires sentence-transformers>=2.7.0

from sentence_transformers import SentenceTransformer

# Load the model
model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

# We recommend enabling flash_attention_2 for better acceleration and memory saving,
# together with setting `padding_side` to "left":
# model = SentenceTransformer(
#     "Qwen/Qwen3-Embedding-0.6B",
#     model_kwargs={"attn_implementation": "flash_attention_2", "device_map": "auto"},
#     tokenizer_kwargs={"padding_side": "left"},
# )

# The queries and documents to embed
queries = [
    "What is the capital of China?",
    "Explain gravity",
]
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]

# Encode the queries and documents. Note that queries benefit from using a prompt
# Here we use the prompt called "query" stored under `model.prompts`, but you can
# also pass your own prompt via the `prompt` argument
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)

# Compute the (cosine) similarity between the query and document embeddings
similarity = model.similarity(query_embeddings, document_embeddings)
print(similarity)
# tensor([[0.7646, 0.1414],
#         [0.1355, 0.6000]])
```
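
Unless configured otherwise, `model.similarity` in Sentence Transformers computes cosine similarity: the dot product of two vectors divided by the product of their norms. A self-contained sketch on toy vectors (not actual model outputs):

```python
import math

def cosine(a, b):
    """Cosine similarity: dot product divided by the product of the norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy vectors standing in for a query/document embedding pair.
q = [1.0, 2.0, 3.0]
d = [2.0, 4.0, 6.0]   # same direction as q -> similarity 1.0
print(round(cosine(q, d), 4))  # 1.0
```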

Transformers Usage

```python
# Requires transformers>=4.51.0

import torch
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def last_token_pool(last_hidden_states: Tensor,
                    attention_mask: Tensor) -> Tensor:
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery:{query}'

# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'

queries = [
    get_detailed_instruct(task, 'What is the capital of China?'),
    get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents

tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-0.6B', padding_side='left')
model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-0.6B')

# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-0.6B', attn_implementation="flash_attention_2", torch_dtype=torch.float16).cuda()

max_length = 8192

# Tokenize the input texts
batch_dict = tokenizer(
    input_texts,
    padding=True,
    truncation=True,
    max_length=max_length,
    return_tensors="pt",
)
batch_dict.to(model.device)
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
# [[0.7645568251609802, 0.14142508804798126], [0.13549736142158508, 0.5999549627304077]]
```
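
The `last_token_pool` helper above picks, for each sequence, the hidden state of its last non-padding token (with a shortcut for left-padded batches, where that is always the final position). The right-padding case can be illustrated without torch, using nested lists as stand-ins for tensors:

```python
def last_token_pool_simple(hidden_states, attention_mask):
    """For each sequence, return the hidden state of its last real token.
    hidden_states: [batch][seq_len][dim] nested lists;
    attention_mask: [batch][seq_len], 1 for real tokens, 0 for padding."""
    pooled = []
    for states, mask in zip(hidden_states, attention_mask):
        last = sum(mask) - 1  # index of the last non-padding token
        pooled.append(states[last])
    return pooled

# Batch of 2 sequences, seq_len 3, hidden dim 2; the second one is padded.
hidden = [[[1, 1], [2, 2], [3, 3]],
          [[4, 4], [5, 5], [0, 0]]]
mask = [[1, 1, 1],
        [1, 1, 0]]
print(last_token_pool_simple(hidden, mask))  # [[3, 3], [5, 5]]
```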

vLLM Usage

```python
# Requires vllm>=0.8.5
import torch
from vllm import LLM

def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery:{query}'

# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'

queries = [
    get_detailed_instruct(task, 'What is the capital of China?'),
    get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents

model = LLM(model="Qwen/Qwen3-Embedding-0.6B", task="embed")

outputs = model.embed(input_texts)
embeddings = torch.tensor([o.outputs.embedding for o in outputs])
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
# [[0.7620252966880798, 0.14078938961029053], [0.1358368694782257, 0.6013815999031067]]
```

📌 Tip: we recommend that developers customize the instruction for their specific scenario, task, and language. Our tests show that, in most retrieval scenarios, omitting the instruction on the query side causes a drop in retrieval performance of roughly 1% to 5%.

Evaluation

MTEB (Multilingual)

| Model | Size | Mean (Task) | Mean (Type) | Bitext Mining | Classification | Clustering | Instruction Retrieval | Multilabel Classification | Pair Classification | Reranking | Retrieval | STS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| NV-Embed-v2 | 7B | 56.29 | 49.58 | 57.84 | 57.29 | 40.80 | 1.04 | 18.63 | 78.94 | 63.82 | 56.72 | 71.10 |
| GritLM-7B | 7B | 60.92 | 53.74 | 70.53 | 61.83 | 49.75 | 3.45 | 22.77 | 79.94 | 63.78 | 58.31 | 73.33 |
| BGE-M3 | 0.6B | 59.56 | 52.18 | 79.11 | 60.35 | 40.88 | -3.11 | 20.1 | 80.76 | 62.79 | 54.60 | 74.12 |
| multilingual-e5-large-instruct | 0.6B | 63.22 | 55.08 | 80.13 | 64.94 | 50.75 | -0.40 | 22.91 | 80.86 | 62.61 | 57.12 | 76.81 |
| gte-Qwen2-1.5B-instruct | 1.5B | 59.45 | 52.69 | 62.51 | 58.32 | 52.05 | 0.74 | 24.02 | 81.58 | 62.58 | 60.78 | 71.61 |
| gte-Qwen2-7b-Instruct | 7B | 62.51 | 55.93 | 73.92 | 61.55 | 52.77 | 4.94 | 25.48 | 85.13 | 65.55 | 60.08 | 73.98 |
| text-embedding-3-large | - | 58.93 | 51.41 | 62.17 | 60.27 | 46.89 | -2.68 | 22.03 | 79.17 | 63.89 | 59.27 | 71.68 |
| Cohere-embed-multilingual-v3.0 | - | 61.12 | 53.23 | 70.50 | 62.95 | 46.89 | -1.89 | 22.74 | 79.88 | 64.07 | 59.16 | 74.80 |
| Gemini Embedding | - | 68.37 | 59.59 | 79.28 | 71.82 | 54.59 | 5.18 | 29.16 | 83.63 | 65.58 | 67.71 | 79.40 |
| Qwen3-Embedding-0.6B | 0.6B | 64.33 | 56.00 | 72.22 | 66.83 | 52.33 | 5.09 | 24.59 | 80.83 | 61.41 | 64.64 | 76.17 |
| Qwen3-Embedding-4B | 4B | 69.45 | 60.86 | 79.36 | 72.33 | 57.15 | 11.56 | 26.77 | 85.05 | 65.08 | 69.60 | 80.86 |
| Qwen3-Embedding-8B | 8B | 70.58 | 61.69 | 80.89 | 74.00 | 57.65 | 10.06 | 28.66 | 86.40 | 65.63 | 70.88 | 81.08 |

Note: for the compared models, scores are taken from the MTEB online leaderboard as of May 24, 2025.

MTEB (English v2)

| MTEB English / Model | Param. | Mean (Task) | Mean (Type) | Classification | Clustering | Pair Classification | Reranking | Retrieval | STS | Summarization |
|---|---|---|---|---|---|---|---|---|---|---|
| multilingual-e5-large-instruct | 0.6B | 65.53 | 61.21 | 75.54 | 49.89 | 86.24 | 48.74 | 53.47 | 84.72 | 29.89 |
| NV-Embed-v2 | 7.8B | 69.81 | 65.00 | 87.19 | 47.66 | 88.69 | 49.61 | 62.84 | 83.82 | 35.21 |
| GritLM-7B | 7.2B | 67.07 | 63.22 | 81.25 | 50.82 | 87.29 | 49.59 | 54.95 | 83.03 | 35.65 |
| gte-Qwen2-1.5B-instruct | 1.5B | 67.20 | 63.26 | 85.84 | 53.54 | 87.52 | 49.25 | 50.25 | 82.51 | 33.94 |
| stella_en_1.5B_v5 | 1.5B | 69.43 | 65.32 | 89.38 | 57.06 | 88.02 | 50.19 | 52.42 | 83.27 | 36.91 |
| gte-Qwen2-7B-instruct | 7.6B | 70.72 | 65.77 | 88.52 | 58.97 | 85.9 | 50.47 | 58.09 | 82.69 | 35.74 |
| gemini-embedding-exp-03-07 | - | 73.3 | 67.67 | 90.05 | 59.39 | 87.7 | 48.59 | 64.35 | 85.29 | 38.28 |
| Qwen3-Embedding-0.6B | 0.6B | 70.70 | 64.88 | 85.76 | 54.05 | 84.37 | 48.18 | 61.83 | 86.57 | 33.43 |
| Qwen3-Embedding-4B | 4B | 74.60 | 68.10 | 89.84 | 57.51 | 87.01 | 50.76 | 68.46 | 88.72 | 34.39 |
| Qwen3-Embedding-8B | 8B | 75.22 | 68.71 | 90.43 | 58.57 | 87.52 | 51.56 | 69.44 | 88.58 | 34.83 |

C-MTEB (Chinese MTEB)

| C-MTEB | Param. | Mean (Task) | Mean (Type) | Classification | Clustering | Pair Classification | Reranking | Retrieval | STS |
|---|---|---|---|---|---|---|---|---|---|
| multilingual-e5-large-instruct | 0.6B | 58.08 | 58.24 | 69.80 | 48.23 | 64.52 | 57.45 | 63.65 | 45.81 |
| bge-multilingual-gemma2 | 9B | 67.64 | 75.31 | 59.30 | 86.67 | 68.28 | 73.73 | 55.19 | - |
| gte-Qwen2-1.5B-instruct | 1.5B | 67.12 | 67.79 | 72.53 | 54.61 | 79.5 | 68.21 | 71.86 | 60.05 |
| gte-Qwen2-7B-instruct | 7.6B | 71.62 | 72.19 | 75.77 | 66.06 | 81.16 | 69.24 | 75.70 | 65.20 |
| ritrieve_zh_v1 | 0.3B | 72.71 | 73.85 | 76.88 | 66.5 | 85.98 | 72.86 | 76.97 | 63.92 |
| Qwen3-Embedding-0.6B | 0.6B | 66.33 | 67.45 | 71.40 | 68.74 | 76.42 | 62.58 | 71.03 | 54.52 |
| Qwen3-Embedding-4B | 4B | 72.27 | 73.51 | 75.46 | 77.89 | 83.34 | 66.05 | 77.03 | 61.26 |
| Qwen3-Embedding-8B | 8B | 73.84 | 75.00 | 76.97 | 80.08 | 84.23 | 66.99 | 78.21 | 63.53 |

Citation

If you find our work helpful, please consider citing it:

```bibtex
@article{qwen3embedding,
  title={Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models},
  author={Zhang, Yanzhao and Li, Mingxin and Long, Dingkun and Zhang, Xin and Lin, Huan and Yang, Baosong and Xie, Pengjun and Yang, An and Liu, Dayiheng and Lin, Junyang and Huang, Fei and Zhou, Jingren},
  journal={arXiv preprint arXiv:2506.05176},
  year={2025}
}
```

zhlo/Qwen3-Embedding-0.6B-deploy

Author: zhlo

Tags: feature-extraction, sentence-transformers

Created: 2025-06-08 06:16:34+00:00

Updated: 2025-06-08 07:26:52+00:00


Files (16)

.gitattributes
1_Pooling/config.json
README.md
added_tokens.json
config.json
config_sentence_transformers.json
generation_config.json
merges.txt
model.onnx
model.onnx_data
model.safetensors
modules.json
special_tokens_map.json
tokenizer.json
tokenizer_config.json
vocab.json