
SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5

This is a sentence-transformers model fine-tuned from Alibaba-NLP/gte-large-en-v1.5 on the json dataset. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base Model: Alibaba-NLP/gte-large-en-v1.5 <!-- at revision 104333d6af6f97649377c2afbde10a7704870c7b -->
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json <!-- - Language: Unknown --> <!-- - License: Unknown -->

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
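As the Pooling configuration above shows (pooling_mode_cls_token: True, mean/max pooling disabled), the sentence embedding is the hidden state of the first ([CLS]) token rather than an average over tokens. A minimal NumPy sketch of that pooling step, using toy 3-dimensional activations in place of the model's 1024-dimensional ones:

```python
import numpy as np

# Toy token activations for one sentence: shape (seq_len, hidden_dim).
# Values are invented; the real model produces (seq_len, 1024) activations.
token_embeddings = np.array([
    [0.5, -1.0, 2.0],   # [CLS] token
    [1.0,  0.0, 0.0],   # first word token
    [0.0,  1.0, 1.0],   # second word token
])

# CLS pooling: the sentence embedding is simply the first token's hidden state.
sentence_embedding = token_embeddings[0]
print(sentence_embedding.tolist())  # the [CLS] row: [0.5, -1.0, 2.0]
```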

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'What should individuals or organizations provide to ensure that people impacted by an automated system are informed about significant changes in use cases or key functionalities?',
    'use, the individual or organization responsible for the system, and ex\xad\nplanations of outcomes that are clear, timely, and accessible. Such \nnotice should be kept up-to-date and people impacted by the system \nshould be notified of significant use case or key functionality chang\xad\nes. You should know how and why an outcome impacting you was de\xad\ntermined by an automated system, including when the automated',
    'software-algorithms-and-artificial-intelligence; U.S. Department of Justice. Algorithms, Artificial\nIntelligence, and Disability Discrimination in Hiring. May 12, 2022. https://beta.ada.gov/resources/ai\xad\nguidance/\n54. Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
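Since the model's similarity function is cosine similarity, `model.similarity` is equivalent to comparing L2-normalized embeddings with a dot product. A self-contained sketch, with toy 4-dimensional vectors standing in for the real 1024-dimensional embeddings:

```python
import numpy as np

def cosine_similarity_matrix(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

# Toy 3 x 4 "embeddings" standing in for the model's 3 x 1024 output.
emb = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
])
sims = cosine_similarity_matrix(emb, emb)
print(sims.shape)  # (3, 3), like model.similarity(embeddings, embeddings)
```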


Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.8667
cosine_accuracy@3 0.9867
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.8667
cosine_precision@3 0.3289
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.8667
cosine_recall@3 0.9867
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9481
cosine_mrr@10 0.93
cosine_map@100 0.93
dot_accuracy@1 0.8667
dot_accuracy@3 1.0
dot_accuracy@5 1.0
dot_accuracy@10 1.0
dot_precision@1 0.8667
dot_precision@3 0.3333
dot_precision@5 0.2
dot_precision@10 0.1
dot_recall@1 0.8667
dot_recall@3 1.0
dot_recall@5 1.0
dot_recall@10 1.0
dot_ndcg@10 0.949
dot_mrr@10 0.9311
dot_map@100 0.9311
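For intuition, the @k metrics above can be reproduced from ranked retrieval results. A toy sketch (the queries and documents here are invented, not the actual evaluation set) for accuracy@k and MRR@k; note that with a single relevant passage per query, recall@k coincides with accuracy@k, as it does in the table:

```python
def accuracy_at_k(ranked_ids, relevant_id, k):
    """1.0 if the relevant document appears in the top-k results, else 0.0."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def mrr_at_k(ranked_ids, relevant_id, k):
    """Reciprocal rank of the relevant document within the top-k, else 0.0."""
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

# Toy run: 3 queries, each with exactly one relevant document.
runs = [
    (["d3", "d1", "d7"], "d3"),  # relevant at rank 1
    (["d2", "d5", "d9"], "d5"),  # relevant at rank 2
    (["d4", "d6", "d8"], "d8"),  # relevant at rank 3
]
acc1 = sum(accuracy_at_k(r, rel, 1) for r, rel in runs) / len(runs)
mrr3 = sum(mrr_at_k(r, rel, 3) for r, rel in runs) / len(runs)
print(round(acc1, 4), round(mrr3, 4))  # 0.3333 0.6111
```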


Training Details

Training Dataset

json

  • Dataset: json
  • Size: 700 training samples
  • Columns: <code>anchor</code> and <code>positive</code>
  • Approximate statistics based on the first 700 samples:
    anchor positive
    type string string
    details <ul><li>min: 12 tokens</li><li>mean: 22.12 tokens</li><li>max: 44 tokens</li></ul> <ul><li>min: 11 tokens</li><li>mean: 80.96 tokens</li><li>max: 571 tokens</li></ul>
  • Samples:
    anchor positive
    <code>What is the primary purpose of the AI Bill of Rights outlined in the October 2022 blueprint?</code> <code>BLUEPRINT FOR AN <br>AI BILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022</code>
    <code>What is the purpose of the Blueprint for an AI Bill of Rights published by the White House Office of Science and Technology Policy?</code> <code>About this Document <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was <br>published by the White House Office of Science and Technology Policy in October 2022. This framework was <br>released one year after OSTP announced the launch of a process to develop "a bill of rights for an AI-powered</code>
    <code>What initiative did the OSTP announce a year prior to the release of the framework for a bill of rights for an AI-powered world?</code> <code>released one year after OSTP announced the launch of a process to develop "a bill of rights for an AI-powered <br>world." Its release follows a year of public engagement to inform this initiative. The framework is available <br>online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights <br>About the Office of Science and Technology Policy <br>The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology</code>
  • Loss: <code>MatryoshkaLoss</code> with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
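MatryoshkaLoss trains the leading dimensions of the embedding to be usable on their own, so a vector can be truncated to any of the listed matryoshka_dims and re-normalized before cosine comparison. A minimal NumPy sketch of that truncation step (random vectors stand in for real model outputs):

```python
import numpy as np

def truncate_embedding(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize to unit length,
    the usual way Matryoshka-trained embeddings are consumed."""
    truncated = emb[..., :dim]
    return truncated / np.linalg.norm(truncated, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
full = rng.normal(size=(2, 1024))  # stand-in for two model embeddings

for dim in [1024, 512, 256, 128, 64]:  # the matryoshka_dims above
    small = truncate_embedding(full, dim)
    assert small.shape == (2, dim)
    # Unit-length rows, so a plain dot product equals cosine similarity.
    assert np.allclose(np.linalg.norm(small, axis=-1), 1.0)
```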
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 7
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
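A quick sanity check on the batch settings above (assuming a single device; the variable names here are local to this sketch): 32 examples per device with 16 accumulation steps gives an effective batch of 512, and with 700 training samples one optimizer update spans roughly 0.73 of an epoch, which matches the first row of the training log:

```python
import math

# Single-device back-of-the-envelope view of the accumulation settings.
num_samples = 700
per_device_train_batch_size = 32
gradient_accumulation_steps = 16

# Examples seen per optimizer update.
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
# Dataloader batches per epoch (last batch is partial: 700 = 21 * 32 + 28).
batches_per_epoch = math.ceil(num_samples / per_device_train_batch_size)
# Fraction of an epoch consumed by one optimizer update (16 of 22 batches).
epochs_per_update = gradient_accumulation_steps / batches_per_epoch

print(effective_batch_size)         # 512
print(batches_per_epoch)            # 22
print(round(epochs_per_update, 4))  # 0.7273
```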

All Hyperparameters

<details><summary>Click to expand</summary>

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 7
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

</details>

Training Logs

Epoch Step cosine_map@100
0.7273 1 0.8548
1.4545 2 0.8811
2.9091 4 0.9233
3.6364 5 0.9311
4.3636 6 0.93
5.0909 7 0.93
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}


lw2134/policy_gte_large_7

Author: lw2134

Tags: sentence-similarity, sentence-transformers

Created: 2024-10-04 02:41:14+00:00

Updated: 2024-10-04 02:42:45+00:00

Files (20)

.gitattributes
1_Pooling/config.json
README.md
config.json
config_sentence_transformers.json
model.safetensors
modules.json
onnx/config.json
onnx/configuration.py
onnx/model.onnx
onnx/special_tokens_map.json
onnx/tokenizer.json
onnx/tokenizer_config.json
onnx/vocab.txt
sentence_bert_config.json
special_tokens_map.json
tokenizer.json
tokenizer_config.json
training_args.bin
vocab.txt