ONNX Model Zoo

Model Card

This model is a transfer-learned version of the 90%-sparse bert-base Prune OFA checkpoint, fine-tuned on the SQuADv1 dataset.

  eval_exact_match = 80.2933
  eval_f1          = 87.6788
  eval_samples     =   10784
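The exact-match and F1 figures above follow standard SQuAD scoring: exact match requires the predicted answer string to equal a gold answer, while F1 measures token overlap between prediction and reference. A minimal sketch of the token-level F1 (the official script also normalizes punctuation and articles, which is omitted here):

```python
from collections import Counter

def squad_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted and a gold answer string."""
    pred_toks = prediction.lower().split()
    gold_toks = reference.lower().split()
    # multiset intersection counts shared tokens, with multiplicity
    num_same = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_toks)
    recall = num_same / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(squad_f1("the eiffel tower", "eiffel tower"))  # ≈ 0.8
```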

Training

Trained with https://github.com/IntelLabs/Model-Compression-Research-Package.git; see pruneofa-transfer-learning.sh for the exact command.

Evaluation

export CUDA_VISIBLE_DEVICES=0

OUTDIR=eval-bert-base-squadv1-pruneofa-90pc-bt
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
mkdir -p $OUTDIR  # tee below writes into $OUTDIR before run_qa.py creates it

nohup python run_qa.py  \
    --model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt  \
    --dataset_name squad  \
    --do_eval  \
    --per_device_eval_batch_size 128  \
    --max_seq_length 384  \
    --doc_stride 128  \
    --overwrite_output_dir \
    --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
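Under the hood, run_qa.py scores candidate answers by picking the start/end token pair with the highest combined logit. A simplified sketch of that span selection (toy logits; the real post-processing also handles n-best lists, doc strides, and impossible spans):

```python
def best_span(start_logits, end_logits, max_answer_len=30):
    """Pick (start, end) maximizing start_logit + end_logit with end >= start."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        # only consider ends at or after the start, within the length cap
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best, best_score

span, score = best_span([0.1, 2.0, 0.3, 0.2], [0.0, 0.5, 3.0, 0.1])
print(span)  # best span covers tokens 1 through 2
```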

vuiseng9/bert-base-squadv1-pruneofa-90pc-bt

Author: vuiseng9

Tags: question-answering, transformers

Created: 2022-03-02 23:29:05+00:00

Updated: 2022-01-18 19:13:21+00:00


Files (39)

.gitattributes
README.md
all_results.json
args.bin
bert-base-squadv1-pruneofa-90pc-bt.onnx
checkpoint-56750/config.json
checkpoint-56750/optimizer.pt
checkpoint-56750/pruneofa_lt_pytorch_model.bin
checkpoint-56750/pytorch_model.bin
checkpoint-56750/rng_state.pth
checkpoint-56750/scheduler.pt
checkpoint-56750/sparsified_pytorch_model.bin
checkpoint-56750/special_tokens_map.json
checkpoint-56750/tokenizer.json
checkpoint-56750/tokenizer_config.json
checkpoint-56750/trainer_state.json
checkpoint-56750/training_args.bin
checkpoint-56750/vocab.txt
config.json
eval_nbest_predictions.json
eval_predictions.json
eval_results.json
final_pytorch_model.bin
layer_wise_sparsity_global_rate_70.20.csv
layer_wise_sparsity_global_rate_70.20.md
linear_layer_sparsity_85M_params_90.00_sparsity.csv
linear_layer_sparsity_85M_params_90.00_sparsity.md
pruneofa-transfer-learning.sh
pruning_config.json
pytorch_model.bin
raw_pytorch_model.bin
run.log
special_tokens_map.json
tokenizer.json
tokenizer_config.json
train_results.json
trainer_state.json
training_args.bin
vocab.txt
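The layer_wise_sparsity and linear_layer_sparsity reports listed above tally the fraction of zero-valued weights per layer. A minimal sketch of that measurement on a toy weight matrix (the 768×768 shape mirrors a BERT-base linear layer; the masking here is random and purely illustrative):

```python
import numpy as np

def sparsity(weight: np.ndarray) -> float:
    """Fraction of exactly-zero entries in a weight matrix."""
    return float(np.mean(weight == 0))

# toy layer: zero out ~90% of entries, matching the model's target sparsity
rng = np.random.default_rng(0)
w = rng.standard_normal((768, 768))
w[rng.random(w.shape) < 0.9] = 0.0
print(f"{sparsity(w):.2%}")
```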