Whisper

ONNX weights for openai/whisper-tiny.en, compatible with Transformers.js.

Usage (Transformers.js)

If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

npm i @huggingface/transformers

Example: Transcribe English.

import { pipeline } from '@huggingface/transformers';

// Create speech recognition pipeline
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');

// Transcribe audio from URL
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
// { text: " And so my fellow Americans ask not what your country can do for you, ask what you can do for your country." }

Example: Transcribe English with timestamps.

import { pipeline } from '@huggingface/transformers';

// Create speech recognition pipeline
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');

// Transcribe audio from URL with timestamps
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url, { return_timestamps: true });
// {
//   text: " And so my fellow Americans ask not what your country can do for you, ask what you can do for your country."
//   chunks: [
//     { timestamp: [0, 8],  text: " And so my fellow Americans ask not what your country can do for you" }
//     { timestamp: [8, 11], text: " ask what you can do for your country." }
//   ]
// }

Example: Transcribe English with word-level timestamps.

import { pipeline } from '@huggingface/transformers';

// Create speech recognition pipeline
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');

// Transcribe audio from URL with word-level timestamps
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url, { return_timestamps: 'word' });
// {
//   "text": " And so my fellow Americans ask not what your country can do for you ask what you can do for your country.",
//   "chunks": [
//     { "text": " And", "timestamp": [0, 0.78] },
//     { "text": " so", "timestamp": [0.78, 1.06] },
//     { "text": " my", "timestamp": [1.06, 1.46] },
//     ...
//     { "text": " for", "timestamp": [9.72, 9.92] },
//     { "text": " your", "timestamp": [9.92, 10.22] },
//     { "text": " country.", "timestamp": [10.22, 13.5] }
//   ]
// }

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using 🤗 Optimum and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
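As a sketch of that conversion step (assuming a Python environment; the output directory name is arbitrary), the 🤗 Optimum CLI can export a Transformers checkpoint to ONNX:

```shell
# Install Optimum with ONNX export support
pip install "optimum[exporters]"

# Export the PyTorch checkpoint to ONNX; the exported
# weights land in ./whisper-tiny.en-onnx/
optimum-cli export onnx --model openai/whisper-tiny.en whisper-tiny.en-onnx/
```

The exported files would then be moved into an `onnx/` subfolder of the model repo, mirroring the layout of this one.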

Xenova/whisper-tiny.en

Author: Xenova

automatic-speech-recognition transformers.js
Downloads: 61.6K · Likes: 23

Created: 2023-05-02 21:37:47+00:00

Updated: 2025-12-16 18:46:53+00:00


Files (45)

.gitattributes
README.md
added_tokens.json
config.json
generation_config.json
merges.txt
normalizer.json
onnx/decoder_model.onnx ONNX
onnx/decoder_model_bnb4.onnx ONNX
onnx/decoder_model_fp16.onnx ONNX
onnx/decoder_model_int8.onnx ONNX
onnx/decoder_model_merged.onnx ONNX
onnx/decoder_model_merged_bnb4.onnx ONNX
onnx/decoder_model_merged_fp16.onnx ONNX
onnx/decoder_model_merged_int8.onnx ONNX
onnx/decoder_model_merged_q4.onnx ONNX
onnx/decoder_model_merged_q4f16.onnx ONNX
onnx/decoder_model_merged_quantized.onnx ONNX
onnx/decoder_model_merged_uint8.onnx ONNX
onnx/decoder_model_q4.onnx ONNX
onnx/decoder_model_q4f16.onnx ONNX
onnx/decoder_model_quantized.onnx ONNX
onnx/decoder_model_uint8.onnx ONNX
onnx/decoder_with_past_model.onnx ONNX
onnx/decoder_with_past_model_bnb4.onnx ONNX
onnx/decoder_with_past_model_fp16.onnx ONNX
onnx/decoder_with_past_model_int8.onnx ONNX
onnx/decoder_with_past_model_q4.onnx ONNX
onnx/decoder_with_past_model_q4f16.onnx ONNX
onnx/decoder_with_past_model_quantized.onnx ONNX
onnx/decoder_with_past_model_uint8.onnx ONNX
onnx/encoder_model.onnx ONNX
onnx/encoder_model_bnb4.onnx ONNX
onnx/encoder_model_fp16.onnx ONNX
onnx/encoder_model_q4.onnx ONNX
onnx/encoder_model_q4f16.onnx ONNX
onnx/encoder_model_quantized.onnx ONNX
onnx/encoder_model_uint8.onnx ONNX
preprocessor_config.json
quant_config.json
quantize_config.json
special_tokens_map.json
tokenizer.json
tokenizer_config.json
vocab.json