Documentation
README for the Xenova/bge-m3 model, providing ONNX weights compatible with Transformers.js.
Usage (Transformers.js)
If you haven't already, you can install the Transformers.js JavaScript library from NPM with:
npm i @huggingface/transformers
You can then compute embeddings with the model as follows:
import { pipeline } from '@huggingface/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-m3');
// Compute sentence embeddings
const texts = ["What is BGE M3?", "Defination of BM25"];
const embeddings = await extractor(texts, { pooling: 'cls', normalize: true });
console.log(embeddings);
// Tensor {
// dims: [ 2, 1024 ],
// type: 'float32',
// data: Float32Array(2048) [ -0.0340719036757946, -0.04478546231985092, ... ],
// size: 2048
// }
console.log(embeddings.tolist()); // Convert the embeddings to a JavaScript list
// [
// [ -0.0340719036757946, -0.04478546231985092, -0.004497686866670847, ... ],
// [ -0.015383965335786343, -0.041989751160144806, -0.025820579379796982, ... ]
// ]
You can also use the model for retrieval. For example:
import { pipeline, cos_sim } from '@huggingface/transformers';
// Create a feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/bge-m3');
// The query to use for retrieval
const query = 'What is BGE M3?';
// List of documents to embed
const texts = [
'BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.',
'BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document',
];
// Compute sentence embeddings for the documents
const embeddings = await extractor(texts, { pooling: 'cls', normalize: true });
// Compute the query embedding
const query_embeddings = await extractor(query, { pooling: 'cls', normalize: true });
// Sort documents by cosine similarity score
const scores = embeddings.tolist().map(
(embedding, i) => ({
id: i,
score: cos_sim(query_embeddings.data, embedding),
text: texts[i],
})
).sort((a, b) => b.score - a.score);
console.log(scores);
// [
// { id: 0, score: 0.62532672968664, text: 'BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.' },
// { id: 1, score: 0.33111060648806, text: 'BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document' },
// ]
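The ranking above relies on cosine similarity. A self-contained plain-JavaScript equivalent is shown below for illustration; `cos_sim` from `@huggingface/transformers` does this for you. Note that for embeddings computed with `normalize: true`, both norms are already 1, so the score reduces to a dot product.

```javascript
// Cosine similarity of two equal-length vectors (illustrative,
// not the library's implementation of cos_sim).
function cosSim(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosSim([1, 0], [1, 0])); // 1 (same direction)
console.log(cosSim([1, 0], [0, 1])); // 0 (orthogonal)
```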
Note: Having a separate repository for the ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting them to ONNX using 🤗 Optimum and structuring your repository like this one (with the ONNX weights located in a subfolder named onnx).
Xenova/bge-m3
Author: Xenova
Tags: feature-extraction, transformers.js
Created: 2024-02-01 17:02:11+00:00
Updated: 2026-02-10 19:47:18+00:00
Files on Hugging Face (26):
.gitattributes
README.md
config.json
onnx/model.onnx
onnx/model.onnx_data
onnx/model_bnb4.onnx
onnx/model_fp16.onnx
onnx/model_int8.onnx
onnx/model_q4.onnx
onnx/model_q4f16.onnx
onnx/model_quantized.onnx
onnx/model_uint8.onnx
onnx/sentence_transformers.onnx
onnx/sentence_transformers.onnx_data
onnx/sentence_transformers_bnb4.onnx
onnx/sentence_transformers_fp16.onnx
onnx/sentence_transformers_int8.onnx
onnx/sentence_transformers_q4.onnx
onnx/sentence_transformers_q4f16.onnx
onnx/sentence_transformers_quantized.onnx
onnx/sentence_transformers_uint8.onnx
quantize_config.json
sentencepiece.bpe.model
special_tokens_map.json
tokenizer.json
tokenizer_config.json
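The onnx subfolder contains several quantized variants of the weights. In Transformers.js v3 a variant can reportedly be selected via the pipeline's `dtype` option; the mapping below from `dtype` value to weight file is an assumption inferred from the file names in this repository, not documented behavior.

```javascript
// Assumed mapping from a Transformers.js `dtype` value to the ONNX
// weight file it would load from the onnx/ subfolder (illustrative).
const dtypeToFile = {
  fp32: 'model.onnx',
  fp16: 'model_fp16.onnx',
  int8: 'model_int8.onnx',
  uint8: 'model_uint8.onnx',
  q8: 'model_quantized.onnx',
  q4: 'model_q4.onnx',
  q4f16: 'model_q4f16.onnx',
  bnb4: 'model_bnb4.onnx',
};

console.log(dtypeToFile.fp16); // model_fp16.onnx
```

For example, `await pipeline('feature-extraction', 'Xenova/bge-m3', { dtype: 'fp16' })` would then load the half-precision weights (assuming v3 of the library).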