sentence-transformers/msmarco-distilbert-base-tas-b
This is a port of the DistilBert TAS-B model to sentence-transformers: it maps sentences and paragraphs to a 768-dimensional dense vector space and was optimized for the task of semantic search.
Usage (Sentence-Transformers)
Using this model is easy when you have sentence-transformers installed:
pip install -U sentence-transformers
Then you can use the model like this:
from sentence_transformers import SentenceTransformer, util

query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load the model
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-tas-b')

# Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)

# Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
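For larger document collections, sentence-transformers also provides a util.semantic_search helper that performs the scoring and sorting in one call. A minimal sketch; note that semantic_search defaults to cosine similarity, so we pass score_function=util.dot_score to match the dot-product objective this model was trained with:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-tas-b')

query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

query_emb = model.encode(query, convert_to_tensor=True)
doc_emb = model.encode(docs, convert_to_tensor=True)

# Per query, semantic_search returns a list of {'corpus_id': ..., 'score': ...}
# dicts, already sorted by decreasing score
hits = util.semantic_search(query_emb, doc_emb, top_k=2, score_function=util.dot_score)[0]
for hit in hits:
    print(hit['score'], docs[hit['corpus_id']])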
Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: first pass your input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings. For this model, that is CLS pooling, i.e. taking the embedding of the first token.
from transformers import AutoTokenizer, AutoModel
import torch

# CLS Pooling - take the output of the first token
def cls_pooling(model_output):
    return model_output.last_hidden_state[:, 0]

# Encode text
def encode(texts):
    # Tokenize sentences
    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')

    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input, return_dict=True)

    # Perform pooling
    embeddings = cls_pooling(model_output)

    return embeddings

# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-base-tas-b")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-distilbert-base-tas-b")

# Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)

# Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
for doc, score in doc_score_pairs:
    print(score, doc)
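Both code paths implement the same pipeline (tokenize, forward pass, CLS pooling), so they should yield the same embeddings up to floating-point noise. A minimal sanity-check sketch, assuming both model variants fit in memory at once:

from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer, AutoModel
import torch

st_model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-tas-b')
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-base-tas-b")
hf_model = AutoModel.from_pretrained("sentence-transformers/msmarco-distilbert-base-tas-b")

texts = ["Around 9 Million people live in London"]

# sentence-transformers path
st_emb = st_model.encode(texts, convert_to_tensor=True)

# manual path: tokenize, forward pass, CLS pooling
encoded = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    hf_emb = hf_model(**encoded).last_hidden_state[:, 0]

# The two embeddings should agree
print(torch.allclose(st_emb, hf_emb, atol=1e-5))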
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
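Printing the loaded SentenceTransformer reproduces this module listing, which is a quick way to verify the pooling mode and sequence-length configuration:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-tas-b')
print(model)                 # prints the Transformer + Pooling module stack shown above
print(model.max_seq_length)  # 512, per the Transformer module's configuration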
Citing & Authors
Model: sentence-transformers/msmarco-distilbert-base-tas-b
Author: sentence-transformers
Tags: sentence-similarity, sentence-transformers
Created: 2022-03-02 23:29:05+00:00
Updated: 2025-03-06 13:31:42+00:00
Files on Hugging Face (27):
.gitattributes
1_Pooling/config.json
README.md
config.json
config_sentence_transformers.json
model.safetensors
modules.json
onnx/model.onnx
onnx/model_O1.onnx
onnx/model_O2.onnx
onnx/model_O3.onnx
onnx/model_O4.onnx
onnx/model_qint8_arm64.onnx
onnx/model_qint8_avx512.onnx
onnx/model_qint8_avx512_vnni.onnx
onnx/model_quint8_avx2.onnx
openvino/openvino_model.bin
openvino/openvino_model.xml
openvino/openvino_model_qint8_quantized.bin
openvino/openvino_model_qint8_quantized.xml
pytorch_model.bin
sentence_bert_config.json
special_tokens_map.json
tf_model.h5
tokenizer.json
tokenizer_config.json
vocab.txt
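The onnx/ and openvino/ exports listed above can be loaded through sentence-transformers' backend support. A minimal sketch, assuming a recent sentence-transformers (3.2 or later) with the ONNX extras (optimum, onnxruntime) installed; picking the O3-optimized export here is just one choice among the files above:

from sentence_transformers import SentenceTransformer

# Load the ONNX export instead of the PyTorch weights;
# backend="openvino" works analogously for the openvino/ files
model = SentenceTransformer(
    'sentence-transformers/msmarco-distilbert-base-tas-b',
    backend="onnx",
    model_kwargs={"file_name": "onnx/model_O3.onnx"},
)
emb = model.encode("How many people live in London?")
print(emb.shape)  # (768,)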