
# Drug Classification Model (ONNX Format)

## Model Description
A BERT-base model fine-tuned for drug classification, exported to ONNX format.

## Usage
```python
from transformers import AutoTokenizer
from onnxruntime import InferenceSession

# Load the tokenizer and the exported ONNX graph
tokenizer = AutoTokenizer.from_pretrained("your-username/model-name")
session = InferenceSession("model.onnx")

# Preprocessing
inputs = tokenizer("I have a headache", return_tensors="np")

# Inference
outputs = session.run(None, dict(inputs))
predicted_id = outputs[0].argmax()
```
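The raw `session.run` output is a logits array, so `predicted_id` is just a class index; mapping it back to a label name requires the `id2label` table stored in the model's `config.json`. A minimal post-processing sketch (the label names below are hypothetical placeholders, not this model's real classes):

```python
import numpy as np

def postprocess(logits, id2label):
    """Convert raw logits to (label, probability) via a numerically stable softmax."""
    logits = np.asarray(logits, dtype=np.float64).ravel()
    shifted = logits - logits.max()              # subtract max for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    idx = int(probs.argmax())
    return id2label[idx], float(probs[idx])

# Hypothetical 3-class example; the real mapping lives in config.json ("id2label").
id2label = {0: "antibiotic", 1: "analgesic", 2: "antihistamine"}
label, prob = postprocess(np.array([0.1, 2.5, -1.0]), id2label)
```

With a real run you would pass `outputs[0]` instead of the hand-written logits array.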

## Browser Usage

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Transformers.js Example</title>
  </head>
  <body>
    <h1>Transformers.js in Browser</h1>
    <script type="module">
      import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers';

      // Create a pipeline
      const classifier = await pipeline('text-classification', 'xuxiaoda/drug_classification');
      // Inference
      const result = await classifier('What are the drugs for treating high blood pressure?');
      console.log(result[0].label);
    </script>
  </body>
</html>
```
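One caveat that applies to the Python inference above: some BERT ONNX exports omit the `token_type_ids` input, and `onnxruntime` raises an error when the feed dict contains a name the graph does not declare. A small helper sketch (assuming only the standard `InferenceSession.get_inputs()` API) that keeps only declared inputs:

```python
def filter_feeds(feeds, input_names):
    """Drop feed entries the ONNX graph does not declare as inputs."""
    names = set(input_names)
    return {k: v for k, v in feeds.items() if k in names}

# With a live session you would call:
#   feeds = filter_feeds(dict(inputs), [i.name for i in session.get_inputs()])
# Standalone demonstration with placeholder tensors:
feeds = {"input_ids": [[101]], "attention_mask": [[1]], "token_type_ids": [[0]]}
kept = filter_feeds(feeds, ["input_ids", "attention_mask"])
```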


xuxiaoda/drug_classification

Author: xuxiaoda

Tags: text-classification, transformers

Downloads: 0 · Likes: 0

Created: 2025-04-02 06:40:01+00:00

Updated: 2025-04-02 07:32:23+00:00

View on Hugging Face

## Files (16)

.gitattributes
README.md
config.json
drug_classifier.onnx
onnx/model.onnx
onnx/model_quantized.onnx
special_tokens_map.json
tokenizer.json
tokenizer_config.json
trained_model/README.md
trained_model/config.json
trained_model/drug_classifier.onnx
trained_model/special_tokens_map.json
trained_model/tokenizer_config.json
trained_model/vocab.txt
vocab.txt