Adapter for bert-base-uncased
Description
ONNX export of the adapter AdapterHub/bert-base-uncased-pf-drop for bert-base-uncased
Conversion of AdapterHub/bert-base-uncased-pf-drop to the UKP SQuARE format
Usage
import numpy as np
from huggingface_hub import hf_hub_download
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

# Download the exported model (or use model_quant.onnx for the quantized variant)
onnx_path = hf_hub_download(repo_id='UKP-SQuARE/bert-base-uncased-pf-drop-onnx', filename='model.onnx')
onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider'])

context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.'
question = 'What are advantages of ONNX?'
tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/bert-base-uncased-pf-drop-onnx')
inputs = tokenizer(question, context, padding=True, truncation=True, return_tensors='np')
# ONNX Runtime expects int64 input tensors
inputs_int64 = {key: np.array(inputs[key], dtype=np.int64) for key in inputs}
outputs = onnx_model.run(input_feed=dict(inputs_int64), output_names=None)
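For extractive question answering, the session's outputs are start and end logits over the input tokens. A minimal sketch of turning them into an answer span, using mock logits and a simplified greedy search (`best_span` is a hypothetical helper for illustration, not part of this repository):

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=30):
    """Pick the (start, end) pair with the highest combined logit, end >= start."""
    n = len(start_logits)
    best, best_score = (0, 0), -np.inf
    for s in range(n):
        for e in range(s, min(s + max_len, n)):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

# Mock logits over 6 tokens (in practice: outputs[0][0] and outputs[1][0])
start = np.array([0.1, 2.0, 0.3, 0.1, 0.0, 0.1])
end = np.array([0.0, 0.1, 0.2, 1.8, 0.1, 0.0])
tokens = ["ONNX", "interoperability", "of", "frameworks", "and", "optimization"]
s, e = best_span(start, end)
print(" ".join(tokens[s:e + 1]))  # → interoperability of frameworks
```

Production pipelines additionally mask out logits falling in the question segment and handle subword merging, which this sketch omits.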
Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here.
Evaluation results
Refer to the paper for more information on results.
Citation
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
@inproceedings{poth-etal-2021-pre,
    title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
    author = {Poth, Clifton and
      Pfeiffer, Jonas and
      R{\"u}ckl{\'e}, Andreas and
      Gurevych, Iryna},
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.827",
    pages = "10585--10605",
}
UKP-SQuARE/bert-base-uncased-pf-drop-onnx
Author: UKP-SQuARE
question-answering
adapter-transformers
Created: 2022-11-27 14:39:15+00:00
Updated: 2024-04-11 12:49:43+00:00
Files on Hugging Face (9)
.gitattributes
README.md
config.json
model.onnx
model_quant.onnx
special_tokens_map.json
tokenizer.json
tokenizer_config.json
vocab.txt