ONNX Model Library

MS Marco Cross-Encoder

This model was trained on the MS Marco Passage Ranking task.

The model can be used for information retrieval: given a query, encode the query together with every candidate passage (retrieved, for example, with ElasticSearch), then sort the passages in decreasing order of score. See SBERT.net Retrieve & Re-Rank for more details. The training code is available here: SBERT.net MS Marco training.
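The retrieve-then-rerank flow described above can be sketched as follows. The `overlap_score` function is a toy stand-in for the cross-encoder (which would be `model.predict` in practice), used only so the example runs without downloading model weights:

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase and split on non-alphanumeric characters."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def overlap_score(query: str, passage: str) -> float:
    """Toy relevance: fraction of query tokens that appear in the passage.
    A real system would score (query, passage) pairs with the cross-encoder."""
    q = tokenize(query)
    return len(q & tokenize(passage)) / len(q)

def rerank(query: str, passages: list[str]) -> list[str]:
    """Sort retrieved candidates by descending relevance to the query."""
    return sorted(passages, key=lambda p: overlap_score(query, p), reverse=True)

candidates = [                      # e.g. the top hits from ElasticSearch
    "Berlin is well known for its museums.",
    "About 3.5 million people live in Berlin.",
]
print(rerank("How many people live in Berlin?", candidates)[0])
```

Swapping `overlap_score` for cross-encoder scores leaves the structure unchanged: retrieval produces candidates cheaply, and the (slower, more accurate) cross-encoder decides the final order.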

Usage with SentenceTransformers

Once SentenceTransformers is installed, usage is straightforward:

from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L4-v2')
scores = model.predict([
    ("How many people live in Berlin?", "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers."),
    ("How many people live in Berlin?", "Berlin is well known for its museums."),
])
print(scores)
# [ 9.1273365 -4.569759 ]
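The scores above are unbounded logits; higher means more relevant. If scores in (0, 1) are preferred, a sigmoid can be applied — a common convention for cross-encoder logits, not something this card prescribes:

```python
import math

def sigmoid(x: float) -> float:
    """Squash an unbounded logit into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Applied to the two logits from the example above: the relevant pair maps
# close to 1, the irrelevant pair close to 0.
for logit in (9.1273365, -4.569759):
    print(round(sigmoid(logit), 4))
```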

Usage with Transformers

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L4-v2')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-MiniLM-L4-v2')

features = tokenizer(
    ['How many people live in Berlin?', 'How many people live in Berlin?'],
    ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
     'New York City is famous for the Metropolitan Museum of Art.'],
    padding=True, truncation=True, return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
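Once logits have been computed for a query paired with many passages, ranking reduces to an argsort. A minimal stand-alone version (plain Python, with made-up logits standing in for the tensor above) is:

```python
def rank_passages(passages: list[str], scores: list[float]) -> list[str]:
    """Order passages by descending cross-encoder score (an argsort)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [passages[i] for i in order]

# Made-up logits shaped like the model's output; in practice they would be
# read off the logits tensor produced by the snippet above.
passages = ["Berlin population passage", "Metropolitan Museum passage"]
logits = [9.13, -4.57]
print(rank_passages(passages, logits))  # most relevant passage first
```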

Performance

The following table shows the performance of various pre-trained cross-encoders on the TREC Deep Learning 2019 and MS Marco Passage Reranking datasets.

Model Name                                                   NDCG@10 (TREC DL 19)   MRR@10 (MS Marco Dev)   Docs / Sec
Version 2 models
cross-encoder/ms-marco-TinyBERT-L2-v2                        69.84                  32.56                   9000
cross-encoder/ms-marco-MiniLM-L2-v2                          71.01                  34.85                   4100
cross-encoder/ms-marco-MiniLM-L4-v2                          73.04                  37.70                   2500
cross-encoder/ms-marco-MiniLM-L6-v2                          74.30                  39.01                   1800
cross-encoder/ms-marco-MiniLM-L12-v2                         74.31                  39.02                   960
Version 1 models
cross-encoder/ms-marco-TinyBERT-L2                           67.43                  30.15                   9000
cross-encoder/ms-marco-TinyBERT-L4                           68.09                  34.50                   2900
cross-encoder/ms-marco-TinyBERT-L6                           69.57                  36.13                   680
cross-encoder/ms-marco-electra-base                          71.99                  36.41                   340
Other models
nboost/pt-tinybert-msmarco                                   63.63                  28.80                   2900
nboost/pt-bert-base-uncased-msmarco                          70.94                  34.75                   340
nboost/pt-bert-large-msmarco                                 73.36                  36.48                   100
Capreolus/electra-base-msmarco                               71.23                  36.89                   340
amberoad/bert-multilingual-passage-reranking-msmarco         68.40                  35.54                   330
sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco   72.82                  37.88                   720

Note: runtimes were measured on a V100 GPU.
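For reference, the two quality metrics in the table can be computed as follows. This is a generic sketch of MRR@10 and NDCG@10 with binary relevance labels, not the exact TREC/MS Marco evaluation scripts:

```python
import math

def mrr_at_10(ranked_relevance: list[list[int]]) -> float:
    """MRR@10: mean reciprocal rank of the first relevant result in the
    top 10. Each inner list holds 0/1 relevance flags in ranked order."""
    total = 0.0
    for flags in ranked_relevance:
        for rank, rel in enumerate(flags[:10], start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

def ndcg_at_10(ranked_relevance: list[list[int]]) -> float:
    """NDCG@10 with binary relevance: DCG of the ranking divided by the
    DCG of the ideal (all relevant items first) ranking."""
    total = 0.0
    for flags in ranked_relevance:
        dcg = sum(rel / math.log2(rank + 1)
                  for rank, rel in enumerate(flags[:10], start=1))
        ideal = sorted(flags, reverse=True)
        idcg = sum(rel / math.log2(rank + 1)
                   for rank, rel in enumerate(ideal[:10], start=1))
        total += dcg / idcg if idcg > 0 else 0.0
    return total / len(ranked_relevance)

# One query whose single relevant passage is ranked second:
print(mrr_at_10([[0, 1, 0]]))   # 0.5
print(ndcg_at_10([[0, 1, 0]]))  # 1/log2(3), about 0.631
```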

cross-encoder/ms-marco-MiniLM-L4-v2

Author: cross-encoder

text-ranking sentence-transformers

Created: 2022-03-02 23:29:05+00:00

Updated: 2025-08-29 14:36:22+00:00


Files (23)

.gitattributes
README.md
config.json
flax_model.msgpack
model.safetensors
onnx/model.onnx ONNX
onnx/model_O1.onnx ONNX
onnx/model_O2.onnx ONNX
onnx/model_O3.onnx ONNX
onnx/model_O4.onnx ONNX
onnx/model_qint8_arm64.onnx ONNX
onnx/model_qint8_avx512.onnx ONNX
onnx/model_qint8_avx512_vnni.onnx ONNX
onnx/model_quint8_avx2.onnx ONNX
openvino/openvino_model.bin
openvino/openvino_model.xml
openvino/openvino_model_qint8_quantized.bin
openvino/openvino_model_qint8_quantized.xml
pytorch_model.bin
special_tokens_map.json
tokenizer.json
tokenizer_config.json
vocab.txt