ONNX Model Library

Documentation

This is the ONNX version of the gte-multilingual-base model.

The example below is adapted from the original model repository to work with the ONNX version.

# Requires transformers>=4.36.0
import onnxruntime as ort
import numpy as np
from transformers import AutoTokenizer

input_texts = [
    "what is the capital of China?",
    "how to implement quick sort in python?",
    "北京",
    "快排算法介绍"
]

# Load the tokenizer (from the original model repository)
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-multilingual-base')

# Load the ONNX model
session = ort.InferenceSession("model.onnx")

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=8192, padding=True, truncation=True, return_tensors='np')

# Run inference
outputs = session.run(None, {
    "input_ids": batch_dict["input_ids"],
    "attention_mask": batch_dict["attention_mask"]
})

# Get embeddings from the second output (the last hidden states)
# Extract the [CLS] token embedding (first token) for each sequence
last_hidden_states = outputs[1]  # Shape: (batch_size, seq_len, hidden_size)
dimension = 768  # Output embedding dimension; any value in [128, 768]
embeddings = last_hidden_states[:, 0, :dimension]  # Shape: (batch_size, dimension)

# Debug: check the embeddings
print(f"Embeddings shape: {embeddings.shape}")
print(f"First few values of first embedding: {embeddings[0][:5]}")
print(f"First few values of second embedding: {embeddings[1][:5]}")

# Normalize embeddings
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# Calculate similarity scores (query vs. the other three texts)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
# [[0.3016996383666992, 0.7503870129585266, 0.3203084468841553]]
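The `dimension` variable in the example reflects that the [CLS] embedding can be truncated to any size in [128, 768] before normalizing. As a minimal sketch of that truncate-then-normalize step, using synthetic NumPy data in place of the real ONNX model outputs (the array shapes and the `cls_embed` helper are assumptions for illustration, not part of the model's API):

```python
import numpy as np

# Synthetic stand-in for the model's last hidden states:
# (batch_size=4, seq_len=10, hidden_size=768)
rng = np.random.default_rng(0)
last_hidden_states = rng.normal(size=(4, 10, 768)).astype(np.float32)

def cls_embed(hidden_states: np.ndarray, dimension: int = 768) -> np.ndarray:
    """Take the [CLS] token, truncate to `dimension`, and L2-normalize."""
    emb = hidden_states[:, 0, :dimension]
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

full = cls_embed(last_hidden_states, 768)
small = cls_embed(last_hidden_states, 256)

print(full.shape, small.shape)  # (4, 768) (4, 256)

# Similarity scores have the same shape regardless of the chosen dimension
scores = (small[:1] @ small[1:].T) * 100
print(scores.shape)  # (1, 3)
```

Note that normalization happens after truncation, so the smaller embedding is still unit-length and its dot products remain valid cosine similarities.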

LIBRAAITECH/gte-multilingual-base-onnx

Author: LIBRAAITECH

text-generation
Downloads: 1 · Likes: 0

Created: 2025-06-30 13:31:08+00:00

Updated: 2025-10-21 10:19:35+00:00

View on Hugging Face

Files (7)

.gitattributes
README.md
config.json
model.onnx ONNX
special_tokens_map.json
tokenizer.json
tokenizer_config.json