
sentence-transformers/stsb-xlm-r-multilingual

This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Usage (Sentence-Transformers)

Using this model becomes easy when you have sentence-transformers installed:

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer

# 1. Load the model with the ONNX backend and the AVX2-quantized weights
model = SentenceTransformer(
    "jovemexausto/stsb-xlm-r-multilingual-onnx-avx2",
    backend="onnx",
    model_kwargs={"file_name": "model_qint8_avx2.onnx"},
)

# 2. Inference works as usual
embeddings = model.encode(["The weather is lovely today.", "It's so sunny outside!", "He drove to the stadium."])
similarities = model.similarity(embeddings, embeddings)
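
The `similarities` call above returns a 3×3 matrix of cosine similarities between the three embeddings.

The quantized file loaded here, model_qint8_avx2.onnx, follows the naming scheme of sentence-transformers' dynamic int8 quantization utility. As a minimal sketch of how such a file can be produced (assuming sentence-transformers >= 3.2, which exposes export_dynamic_quantized_onnx_model; the output directory name below is illustrative, not from this card):

from sentence_transformers import SentenceTransformer, export_dynamic_quantized_onnx_model

# Load the base fp32 model with the ONNX backend
model = SentenceTransformer("sentence-transformers/stsb-xlm-r-multilingual", backend="onnx")

# Dynamic int8 quantization tuned for AVX2 CPUs; this writes model_qint8_avx2.onnx
# into the given directory (the directory name is an assumption)
export_dynamic_quantized_onnx_model(model, "avx2", "stsb-xlm-r-multilingual-onnx-avx2")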

Usage (HuggingFace Transformers)

Without sentence-transformers, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/stsb-xlm-r-multilingual')
model = AutoModel.from_pretrained('sentence-transformers/stsb-xlm-r-multilingual')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
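
To reproduce the model.similarity step from the Sentence-Transformers section with plain PyTorch, one option (a minimal sketch, not part of the original card) is to L2-normalize the pooled embeddings and take their dot products, which yields cosine similarity:

import torch.nn.functional as F

# L2-normalize so that dot products equal cosine similarities
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
similarities = normalized @ normalized.T
print(similarities)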

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
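
Module (0) is the XLM-RoBERTa encoder truncating inputs at 128 tokens, and module (1) is the mean pooling implemented manually above. As a quick sanity check (a sketch, not from the original card), the two code paths should produce near-identical embeddings when both use the base fp32 checkpoint; the int8 ONNX variant will differ slightly due to quantization:

import numpy as np
from sentence_transformers import SentenceTransformer

# Encode with the original fp32 Sentence-Transformers pipeline
st_model = SentenceTransformer('sentence-transformers/stsb-xlm-r-multilingual')
st_embeddings = st_model.encode(sentences)

# 'sentence_embeddings' comes from the manual Transformers pipeline above
print(np.allclose(st_embeddings, sentence_embeddings.numpy(), atol=1e-4))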

Citation and Authors

This model was trained by sentence-transformers.

If you find this model helpful, feel free to cite our publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks:

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}

jovemexausto/stsb-xlm-r-multilingual-onnx-avx2

Author: jovemexausto

Tags: sentence-similarity, sentence-transformers

Created: 2025-06-09 02:38:38+00:00

Updated: 2025-06-09 03:03:19+00:00


Files (12)

.gitattributes
1_Pooling/config.json
README.md
config.json
config_sentence_transformers.json
modules.json
onnx/model.onnx
sentence_bert_config.json
sentencepiece.bpe.model
special_tokens_map.json
tokenizer.json
tokenizer_config.json