
jobbert_knowledge_extraction (ONNX)

This is the ONNX version of jjzha/jobbert_knowledge_extraction. It was automatically converted and uploaded using this Hugging Face Space.

Usage with Transformers.js

See the pipeline documentation for token-classification: https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TokenClassificationPipeline
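As a minimal sketch of the pipeline API linked above (the example sentence and variable names are illustrative, not from the model card):

```javascript
import { pipeline } from '@huggingface/transformers';

// Create a token-classification pipeline backed by this ONNX model.
const extractor = await pipeline(
  'token-classification',
  'onnx-community/jobbert_knowledge_extraction-ONNX',
);

// Run knowledge extraction on a job-posting sentence (illustrative input).
const output = await extractor(
  'We are looking for a developer with experience in Python and Kubernetes.',
);

// `output` is an array of token predictions: { entity, score, index, word, ... }
console.log(output);
```

Running this requires the `@huggingface/transformers` package and downloads the model on first use.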


This is a demo using the following model:

@inproceedings{zhang-etal-2022-skillspan,
    title = "{S}kill{S}pan: Hard and Soft Skill Extraction from {E}nglish Job Postings",
    author = "Zhang, Mike  and
      Jensen, Kristian  and
      Sonniks, Sif  and
      Plank, Barbara",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
    address = "Seattle, United States",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.naacl-main.366",
    doi = "10.18653/v1/2022.naacl-main.366",
    pages = "4962--4984",
    abstract = "Skill Extraction (SE) is an important and widely-studied task useful to gain insights into labor market dynamics. However, there is a lacuna of datasets and annotation guidelines; available datasets are few and contain crowd-sourced labels on the span-level or labels from a predefined skill inventory. To address this gap, we introduce SKILLSPAN, a novel SE dataset consisting of 14.5K sentences and over 12.5K annotated spans. We release its respective guidelines created over three different sources annotated for hard and soft skills by domain experts. We introduce a BERT baseline (Devlin et al., 2019). To improve upon this baseline, we experiment with language models that are optimized for long spans (Joshi et al., 2020; Beltagy et al., 2020), continuous pre-training on the job posting domain (Han and Eisenstein, 2019; Gururangan et al., 2020), and multi-task learning (Caruana, 1997). Our results show that the domain-adapted models significantly outperform their non-adapted counterparts, and single-task outperforms multi-task learning.",
}

Note that there is another endpoint, jjzha/jobbert_skill_extraction. Knowledge can be seen as hard skills, while skills cover both soft and applied skills.

onnx-community/jobbert_knowledge_extraction-ONNX

Author: onnx-community

token-classification transformers.js

Created: 2025-11-16 07:33:48+00:00

Updated: 2025-11-16 07:33:57+00:00

View on Hugging Face

Files (16)

.gitattributes
README.md
config.json
onnx/model.onnx ONNX
onnx/model_bnb4.onnx ONNX
onnx/model_fp16.onnx ONNX
onnx/model_int8.onnx ONNX
onnx/model_q4.onnx ONNX
onnx/model_q4f16.onnx ONNX
onnx/model_quantized.onnx ONNX
onnx/model_uint8.onnx ONNX
quantize_config.json
special_tokens_map.json
tokenizer.json
tokenizer_config.json
vocab.txt
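The onnx/ directory above contains several quantized variants of the model. As a sketch, assuming Transformers.js v3, a specific variant can be selected with the `dtype` option (the mapping from file name to `dtype` value, e.g. model_fp16.onnx → 'fp16', follows the usual Transformers.js convention):

```javascript
import { pipeline } from '@huggingface/transformers';

// Load the fp16 variant (onnx/model_fp16.onnx) instead of the default weights.
// Other values such as 'q8', 'int8', 'uint8', 'q4', 'q4f16', and 'bnb4'
// correspond to the remaining files listed above.
const extractor = await pipeline(
  'token-classification',
  'onnx-community/jobbert_knowledge_extraction-ONNX',
  { dtype: 'fp16' },
);
```

Smaller quantized variants trade some accuracy for lower memory use and faster downloads, which matters most when running in the browser.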