Adapter for bert-base-uncased
Description
ONNX export of the adapter AdapterHub/bert-base-uncased-pf-squad for bert-base-uncased
AdapterHub/bert-base-uncased-pf-squad converted for UKP SQuARE
Usage
from huggingface_hub import hf_hub_download
from onnxruntime import InferenceSession
from transformers import AutoTokenizer
import numpy as np

# Download the exported model (use model_quant.onnx for the quantized variant)
onnx_path = hf_hub_download(repo_id='UKP-SQuARE/bert-base-uncased-pf-squad-onnx', filename='model.onnx')
onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider'])

context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.'
question = 'What are advantages of ONNX?'

tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/bert-base-uncased-pf-squad-onnx')
inputs = tokenizer(question, context, padding=True, truncation=True, return_tensors='np')
# ONNX Runtime expects int64 input tensors
inputs_int64 = {key: np.array(inputs[key], dtype=np.int64) for key in inputs}
outputs = onnx_model.run(input_feed=inputs_int64, output_names=None)
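For an extractive QA model like this one, the session output typically contains start and end logits for the answer span (the exact output order of this export is an assumption here). A minimal sketch of turning such logits into a predicted span, shown with dummy NumPy arrays rather than real model output:

```python
import numpy as np

def decode_span(start_logits, end_logits, max_answer_len=30):
    """Pick the (start, end) pair maximizing start_logits[i] + end_logits[j]
    subject to i <= j and span length below max_answer_len."""
    n = len(start_logits)
    scores = start_logits[:, None] + end_logits[None, :]
    # valid spans: end not before start, and shorter than max_answer_len tokens
    valid = np.triu(np.ones((n, n), dtype=bool)) & ~np.triu(np.ones((n, n), dtype=bool), k=max_answer_len)
    scores = np.where(valid, scores, -np.inf)
    start, end = np.unravel_index(np.argmax(scores), scores.shape)
    return int(start), int(end)

# dummy logits for a 6-token sequence; the best span is tokens 1..3
start_logits = np.array([0.1, 2.0, 0.3, 0.2, 0.1, 0.0])
end_logits = np.array([0.0, 0.1, 0.2, 3.0, 0.1, 0.0])
print(decode_span(start_logits, end_logits))  # (1, 3)
```

The resulting token indices can then be mapped back to text, e.g. via the tokenizer's offset mappings.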
Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer. In particular, training configurations for all tasks can be found here.
Evaluation results
Refer to the paper for more information on results.
Citation
If you use this adapter, please cite our paper "What to Pre-Train on? Efficient Intermediate Task Selection":
@inproceedings{poth-etal-2021-pre,
title = "{W}hat to Pre-Train on? {E}fficient Intermediate Task Selection",
author = {Poth, Clifton and
Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Gurevych, Iryna},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.827",
pages = "10585--10605",
}
UKP-SQuARE/bert-base-uncased-pf-squad-onnx
Author: UKP-SQuARE
question-answering
adapter-transformers
Created: 2022-11-28 21:12:10+00:00
Updated: 2022-12-31 13:32:03+00:00
Files on Hugging Face (9)
.gitattributes
README.md
config.json
model.onnx
model_quant.onnx
special_tokens_map.json
tokenizer.json
tokenizer_config.json
vocab.txt