<p align="center"> <img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px"> </p>
jina-embeddings-v5-text-small-retrieval: a distilled embedding model for retrieval
Elastic Inference Service | ArXiv | Release Notes | Blog
Model Overview
<p align="center">
<img src="https://jina-ai-gmbh.ghost.io/content/images/2026/02/v5_architecture_1771470917.png" alt="jina-embeddings-v5-text 架构" width="600px">
</p>
jina-embeddings-v5-text-small-retrieval is a compact, high-performance text embedding model designed for information retrieval.
It is part of the jina-embeddings-v5-text model family, which also includes jina-embeddings-v5-text-nano, an even smaller model for resource-constrained scenarios.
jina-embeddings-v5-text-small-retrieval is trained with a novel approach that combines distillation with a task-specific contrastive loss, and it outperforms other state-of-the-art models of comparable size across a range of embedding benchmarks.
| Feature | Value |
|---|---|
| Parameter count | 677M |
| Supported task | retrieval |
| Maximum sequence length | 32768 |
| Embedding dimension | 1024 |
| Matryoshka dimensions | 32, 64, 128, 256, 512, 768, 1024 |
| Pooling strategy | Last-token pooling |
| Base model | jinaai/jina-embeddings-v5-text-small |
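The Matryoshka dimensions mean a full embedding can be truncated to any of the listed sizes and renormalized, trading quality for storage. A minimal sketch of that truncation with NumPy; the vector here is a random stand-in, not real model output:

```python
import numpy as np

# Toy stand-in for a full 1024-dim embedding (random values for illustration;
# real embeddings come from the model and are L2-normalized).
rng = np.random.default_rng(0)
full = rng.standard_normal(1024)
full /= np.linalg.norm(full)

def truncate(embedding: np.ndarray, k: int) -> np.ndarray:
    """Matryoshka truncation: keep the first k dimensions, then renormalize
    so cosine similarity remains a plain dot product."""
    truncated = embedding[:k]
    return truncated / np.linalg.norm(truncated)

for k in (32, 128, 256, 1024):
    e = truncate(full, k)
    print(k, e.shape, round(float(np.linalg.norm(e)), 6))
```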

Training and Evaluation
For training details and evaluation results, please refer to our technical report.
Usage
<details> <summary>Requirements</summary>
The following Python packages are required:
- transformers>=5.1.0
- torch>=2.8.0
- peft>=0.15.2
- vllm>=0.15.1

Optional / recommended:
- flash-attention: recommended for faster, more efficient inference, but not required.
- sentence-transformers: required only if you want to use the model through the sentence-transformers interface.
</details>
<details open> <summary>Via the <a href="https://www.elastic.co/docs/explore-analyze/elastic-inference/eis">Elastic Inference Service</a></summary>
The fastest way to use v5-text in production. The Elastic Inference Service (EIS) provides managed embedding inference with built-in scaling, so you can generate embeddings directly in your Elastic deployment.
```
PUT _inference/text_embedding/jina-v5
{
  "service": "elastic",
  "service_settings": {
    "model_id": "jina-embeddings-v5-text-small"
  }
}
```
See the Elastic Inference Service documentation for detailed setup instructions.
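Once the endpoint exists, you can run inference against it. The request below is a sketch of the standard `_inference` call, reusing the endpoint id `jina-v5` created above; check the Elastic documentation for the exact request shape in your version:

```
POST _inference/text_embedding/jina-v5
{
  "input": ["Overview of climate change impacts on coastal cities"]
}
```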
</details>
<details> <summary>Via <a href="https://sbert.net/">sentence-transformers</a></summary>

```python
from sentence_transformers import SentenceTransformer
import torch

model = SentenceTransformer(
    "jinaai/jina-embeddings-v5-text-small-retrieval",
    model_kwargs={"dtype": torch.bfloat16},  # recommended on GPU
    config_kwargs={"_attn_implementation": "flash_attention_2"},  # optional but recommended
)

# Optional: set truncate_dim in encode() to control the embedding size
query = "Which planet is known as the Red Planet?"
documents = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

# Encode the query and the documents
query_embeddings = model.encode(sentences=query, prompt_name="query")
document_embeddings = model.encode(sentences=documents, prompt_name="document")
print(query_embeddings.shape, document_embeddings.shape)
# (1024,) (4, 1024)

similarity = model.similarity(query_embeddings, document_embeddings)
print(similarity)
# tensor([[0.4860, 0.7611, 0.5914, 0.6188]])
```
</details>
<details> <summary>Via <a href="https://github.com/vllm-project/vllm">vLLM</a></summary>

```python
from vllm import LLM
from vllm.config.pooler import PoolerConfig

# Initialize the model
name = "jinaai/jina-embeddings-v5-text-small-retrieval"
model = LLM(
    model=name,
    dtype="float16",
    runner="pooling",
    pooler_config=PoolerConfig(seq_pooling_type="LAST", normalize=True),
)

# Create the text prompts
query = "Overview of climate change impacts on coastal cities"
query_prompt = f"Query: {query}"
document = "The impacts of climate change on coastal cities are significant..."
document_prompt = f"Document: {document}"

# Encode all prompts
prompts = [query_prompt, document_prompt]
outputs = model.encode(prompts, pooling_task="embed")
```
</details>
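The pooler configuration above (`seq_pooling_type="LAST", normalize=True`) amounts to taking the hidden state of each sequence's last non-padded token and L2-normalizing it. A NumPy sketch of that pooling rule on toy hidden states (illustrative only, not vLLM internals):

```python
import numpy as np

def last_token_pool(hidden: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """hidden: (batch, seq, dim); attention_mask: (batch, seq) of 0/1.
    Returns L2-normalized embeddings of shape (batch, dim)."""
    # Index of the last non-padded token in each sequence.
    last_idx = attention_mask.sum(axis=1) - 1
    pooled = hidden[np.arange(hidden.shape[0]), last_idx]
    # L2-normalize, matching normalize=True in the pooler config.
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

# Toy batch: 2 sequences of 4 tokens with 8-dim hidden states;
# the second sequence is padded after 2 tokens.
rng = np.random.default_rng(0)
hidden = rng.standard_normal((2, 4, 8))
mask = np.array([[1, 1, 1, 1], [1, 1, 0, 0]])
emb = last_token_pool(hidden, mask)
print(emb.shape)  # (2, 8)
```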
<details> <summary>Via <a href="https://github.com/huggingface/text-embeddings-inference">Text Embeddings Inference</a></summary>
- Run with Docker on CPU:

```shell
docker run -p 8080:80 \
  ghcr.io/huggingface/text-embeddings-inference:cpu-1.9 \
  --model-id jinaai/jina-embeddings-v5-text-small-retrieval \
  --dtype float32 --pooling last-token
```

- Run with Docker on an NVIDIA GPU (Turing, Ampere, Ada Lovelace, Hopper, or Blackwell):

```shell
docker run --gpus all --shm-size 1g -p 8080:80 \
  ghcr.io/huggingface/text-embeddings-inference:cuda-1.9 \
  --model-id jinaai/jina-embeddings-v5-text-small-retrieval \
  --dtype float16 --pooling last-token
```

Alternatively, you can run it with cargo; see the Text Embeddings Inference documentation for more information.
Send a request to /v1/embeddings to generate embeddings via the OpenAI Embeddings API:

```shell
curl -X POST http://127.0.0.1:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "model": "jinaai/jina-embeddings-v5-text-small-retrieval",
    "input": [
      "Query: Overview of climate change impacts on coastal cities",
      "Document: The impacts of climate change on coastal cities are significant..."
    ]
  }'
```
Or send requests following the Text Embeddings Inference API specification to avoid formatting the inputs manually:

```shell
curl -X POST http://127.0.0.1:8080/embed \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": "Overview of climate change impacts on coastal cities",
    "prompt_name": "query"
  }'
```
</details>
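For retrieval, queries and documents carry the `Query: ` / `Document: ` prefixes shown in the curl examples. A small Python sketch that builds the OpenAI-style request body programmatically; the URL and model id mirror the curl example, and actually sending the request (commented out) requires a running server:

```python
import json

def build_payload(queries, documents, model="jinaai/jina-embeddings-v5-text-small-retrieval"):
    """Prefix inputs for retrieval and wrap them in an OpenAI-style embeddings request."""
    inputs = [f"Query: {q}" for q in queries] + [f"Document: {d}" for d in documents]
    return {"model": model, "input": inputs}

payload = build_payload(
    queries=["Overview of climate change impacts on coastal cities"],
    documents=["The impacts of climate change on coastal cities are significant..."],
)
print(json.dumps(payload, indent=2))

# To send it (requires a running server):
# import urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:8080/v1/embeddings",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# response = json.load(urllib.request.urlopen(req))
```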
<details> <summary>Via <a href="https://github.com/ggml-org/llama.cpp">llama.cpp (GGUF)</a></summary> After installing <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a>, you can run llama-server to host the embedding model as an OpenAI-API-compatible HTTP server, using the corresponding model version:

```shell
llama-server -hf jinaai/jina-embeddings-v5-text-small-retrieval:F16 --embedding --pooling last -ub 32768
```

Client:

```shell
curl -X POST "http://127.0.0.1:8080/v1/embeddings" \
  -H "Content-Type: application/json" \
  -d '{
    "input": [
      "Query: A beautiful sunset over the beach",
      "Query: Un beau coucher de soleil sur la plage",
      "Document: 海滩上美丽的日落",
      "Document: 浜辺に沈む美しい夕日",
      "Document: Golden sunlight melts into the horizon, painting waves in warm amber and rose, while the sky whispers goodnight to the quiet, endless sea."
    ]
  }'
```
</details>
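The server answers in the OpenAI embeddings schema (`{"data": [{"embedding": [...]}, ...]}`). A sketch of ranking documents against a query client-side; the response below is a fabricated toy payload with 3-dim vectors, not real model output:

```python
import numpy as np

# Fabricated toy response in the OpenAI embeddings schema; real vectors
# come from the /v1/embeddings endpoint and have 1024 dimensions.
response = {
    "data": [
        {"index": 0, "embedding": [1.0, 0.0, 0.0]},  # query
        {"index": 1, "embedding": [0.9, 0.1, 0.0]},  # document A
        {"index": 2, "embedding": [0.0, 1.0, 0.0]},  # document B
    ]
}

vectors = np.array([item["embedding"] for item in response["data"]])
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # L2-normalize

query, docs = vectors[0], vectors[1:]
scores = docs @ query        # cosine similarity once normalized
ranking = np.argsort(-scores)  # best match first
print(scores, ranking)       # document A should outrank document B
```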
<details>
<summary>Via <a href="https://huggingface.co/docs/optimum/index">Optimum (ONNX)</a></summary>
You can run an ONNX-optimized version of the model locally with Hugging Face's optimum library. Make sure the required dependencies are installed (e.g. pip install optimum[onnxruntime] transformers torch):

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
import torch

model_id = "jinaai/jina-embeddings-v5-text-small-retrieval"

# 1. Load the tokenizer and the ONNX model
# We point to the 'onnx' subfolder where the weights live
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = ORTModelForFeatureExtraction.from_pretrained(
    model_id,
    subfolder="onnx",
    file_name="model.onnx",
    provider="CPUExecutionProvider",  # or "CUDAExecutionProvider" for GPU
    trust_remote_code=True,
)

# 2. Prepare the inputs
texts = ["Query: How do I use Jina ONNX models?", "Document: Information about semantic matching."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# 3. Inference
with torch.no_grad():
    outputs = model(**inputs)

# 4. Pooling (essential for Jina-v5)
# Jina-v5 uses last-token pooling: take the hidden state
# of the last non-padding token in each sequence.
last_hidden_state = outputs.last_hidden_state
sequence_lengths = inputs.attention_mask.sum(dim=1) - 1
embeddings = last_hidden_state[torch.arange(last_hidden_state.size(0)), sequence_lengths]
print('embeddings shape:', embeddings.shape)
print('embeddings:', embeddings)
```
</details>
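The ONNX snippet stops at pooled embeddings; for cosine similarity you would typically L2-normalize them first, after which similarity reduces to a dot product. A sketch with toy vectors standing in for the pooled `embeddings` (illustrative values only):

```python
import numpy as np

# Toy stand-ins for pooled embeddings: row 0 plays the query,
# row 1 the document (real vectors are 1024-dim).
embeddings = np.array([[0.5, 1.0, -0.2],
                       [0.4, 0.9, -0.1]])

# L2-normalize so cosine similarity becomes a plain dot product.
normalized = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
score = float(normalized[0] @ normalized[1])
print(score)
```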
License
The model is licensed under CC BY-NC 4.0. For commercial use, please contact us.
Citation
If you find jina-embeddings-v5-text-small-retrieval useful in your research, please cite the following paper:
```bibtex
@misc{akram2026jinaembeddingsv5texttasktargetedembeddingdistillation,
      title={jina-embeddings-v5-text: Task-Targeted Embedding Distillation},
      author={Mohammad Kalim Akram and Saba Sturua and Nastia Havriushenko and Quentin Herreros and Michael Günther and Maximilian Werk and Han Xiao},
      year={2026},
      eprint={2602.15547},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2602.15547},
}
```
jinaai/jina-embeddings-v5-text-small-retrieval
Author: jinaai
Created: 2026-02-05 07:41:53+00:00
Updated: 2026-03-02 16:40:42+00:00