MS Marco Cross-Encoder
This model was trained on the MS Marco Passage Ranking task.
The model can be used for Information Retrieval: given a query, encode the query with all candidate passages (e.g., retrieved with ElasticSearch), then sort the passages in decreasing order of score. For more details, see SBERT.net Retrieve & Re-rank. The training code is available here: SBERT.net Training MS Marco
Usage with SentenceTransformers
Using a pre-trained model is straightforward once SentenceTransformers is installed:
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L12-v2')
scores = model.predict([
("How many people live in Berlin?", "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers."),
("How many people live in Berlin?", "Berlin is well known for its museums."),
])
print(scores)
# [ 9.218911 -4.0780287]
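These scores can be used directly for the retrieve & re-rank workflow described in the introduction: score every (query, passage) pair and sort the passages in decreasing order. The sketch below is not part of the original card; the candidate_passages list is a hypothetical stand-in for the hits returned by a first-stage retriever such as ElasticSearch.

from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L12-v2')

query = "How many people live in Berlin?"
# Hypothetical candidates, e.g. the top hits from a first-stage retriever
candidate_passages = [
    "Berlin is well known for its museums.",
    "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.",
    "New York City is famous for the Metropolitan Museum of Art.",
]

# Score every (query, passage) pair with the cross-encoder
scores = model.predict([(query, passage) for passage in candidate_passages])

# Sort the passages in decreasing order of relevance score
for score, passage in sorted(zip(scores, candidate_passages), key=lambda pair: pair[0], reverse=True):
    print(f"{score:.2f}\t{passage}")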
Usage with Transformers
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L12-v2')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-MiniLM-L12-v2')
features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
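The logits here are the same relevance scores as in the SentenceTransformers example, so re-ranking again amounts to sorting by them. A minimal continuation of the snippet above (assuming scores has shape [num_pairs, 1], since the model has a single output label):

# Order the passage indices by descending relevance score
order = torch.argsort(scores.squeeze(-1), descending=True)
print(order)  # expected tensor([0, 1]): the Berlin population passage ranks first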
Performance
In the following table, we provide the performance of various pre-trained Cross-Encoders on the TREC Deep Learning 2019 and MS Marco Passage Reranking datasets.
| Model Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
|---|---|---|---|
| Version 2 models | | | |
| cross-encoder/ms-marco-TinyBERT-L2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L12-v2 | 74.31 | 39.02 | 960 |
| Version 1 models | | | |
| cross-encoder/ms-marco-TinyBERT-L2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| Other models | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 |
Note: Runtime was computed on a V100 GPU.
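The Docs / Sec figures above were measured by the model authors; the exact benchmarking script is not part of this card. A rough, hypothetical way to estimate throughput on your own hardware is sketched below (the workload and batch size are placeholders, so the numbers will differ from the table):

import time
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L12-v2')

# Hypothetical workload: the same (query, passage) pair repeated many times
pairs = [("How many people live in Berlin?", "Berlin is well known for its museums.")] * 1000

start = time.perf_counter()
model.predict(pairs, batch_size=32)
print(f"{len(pairs) / (time.perf_counter() - start):.0f} docs/sec")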