# SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from Alibaba-NLP/gte-large-en-v1.5. It maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** Alibaba-NLP/gte-large-en-v1.5 <!-- at revision 104333d6af6f97649377c2afbde10a7704870c7b -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources

- **Documentation:** Sentence Transformers Documentation
- **Repository:** Sentence Transformers on GitHub
- **Hugging Face:** Sentence Transformers on Hugging Face
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
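The pooling module above uses CLS-token pooling (`pooling_mode_cls_token: True`): the sentence embedding is simply the hidden state of the first (`[CLS]`) token. A minimal numpy sketch of that operation, with illustrative shapes rather than the model's actual code:

```python
import numpy as np

def cls_pooling(token_embeddings: np.ndarray) -> np.ndarray:
    """CLS pooling: (seq_len, hidden_dim) -> (hidden_dim,), the first token's state."""
    return token_embeddings[0]

hidden = np.random.rand(16, 1024)  # stand-in for 16 token states, 1024-dim each
sentence_embedding = cls_pooling(hidden)
print(sentence_embedding.shape)  # (1024,)
```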
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("lw2134/policy_gte_large")
# Run inference
sentences = [
    '1. What measures should be taken to ensure the accuracy and timeliness of data? \n2. Why is it important to limit access to sensitive data and derived data?',
    'maintain accurate, timely, and complete data. \nLimit access to sensitive data and derived data. Sensitive data and derived data should not be sold, \nshared, or made public as part of data brokerage or other agreements. Sensitive data includes data that can be \nused to infer sensitive information; even systems that are not directly marketed as sensitive domain technologies \nare expected to keep sensitive data private. Access to such data should be limited based on necessity and based',
    'comply with the Privacy Act\'s requirements. Among other things, a court may order a federal agency to amend or \ncorrect an individual\'s information in its records or award monetary damages if an inaccurate, irrelevant, untimely, \nor incomplete record results in an adverse determination about an individual\'s "qualifications, character, rights, … \nopportunities…, or benefits." \nNIST\'s Privacy Framework provides a comprehensive, detailed and actionable approach for',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
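With the cosine similarity function configured above, `model.similarity` amounts to normalizing each embedding to unit length and taking pairwise dot products. A minimal numpy sketch using stand-in random vectors (not real model output):

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities: L2-normalize rows, then dot products."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

emb = np.random.rand(3, 1024)  # stand-in for model.encode(sentences)
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (3, 3); the diagonal is 1.0, and the matrix is symmetric
```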
<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation

### Metrics

#### Information Retrieval

| Metric | Value |
|---|---|
| cosine_accuracy@1 | 0.9733 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9733 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9733 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9902 |
| cosine_mrr@10 | 0.9867 |
| cosine_map@100 | 0.9867 |
| dot_accuracy@1 | 0.9733 |
| dot_accuracy@3 | 1.0 |
| dot_accuracy@5 | 1.0 |
| dot_accuracy@10 | 1.0 |
| dot_precision@1 | 0.9733 |
| dot_precision@3 | 0.3333 |
| dot_precision@5 | 0.2 |
| dot_precision@10 | 0.1 |
| dot_recall@1 | 0.9733 |
| dot_recall@3 | 1.0 |
| dot_recall@5 | 1.0 |
| dot_recall@10 | 1.0 |
| dot_ndcg@10 | 0.9902 |
| dot_mrr@10 | 0.9867 |
| dot_map@100 | 0.9867 |
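The precision@k rows above follow directly from the evaluation setup: with exactly one relevant passage per query, precision@k is (relevant hits in the top k) / k, so perfect accuracy@3 yields precision@3 = 1/3 and precision@10 = 0.1. A toy illustration with hypothetical passage IDs:

```python
def precision_at_k(ranked_ids, relevant_id, k):
    """Fraction of the top-k retrieved passages that are relevant (here, at most one)."""
    hits = sum(1 for doc_id in ranked_ids[:k] if doc_id == relevant_id)
    return hits / k

ranked = ["p7", "p2", "p9", "p4", "p1"]  # hypothetical retrieval order for one query
print(precision_at_k(ranked, "p7", 1))  # 1.0
print(precision_at_k(ranked, "p7", 3))  # 0.3333...
```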
<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 500 training samples
- Columns: <code>sentence_0</code> and <code>sentence_1</code>
- Approximate statistics based on the first 500 samples:

  |         | sentence_0 | sentence_1 |
  |:--------|:-----------|:-----------|
  | type    | string | string |
  | details | <ul><li>min: 27 tokens</li><li>mean: 40.71 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 78.92 tokens</li><li>max: 104 tokens</li></ul> |

- Samples:

  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>1. What is the purpose of the AI Bill of Rights mentioned in the context? <br>2. When was the Blueprint for an AI Bill of Rights published?</code> | <code>BLUEPRINT FOR AN <br>AI BILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022</code> |
  | <code>1. What is the purpose of the Blueprint for an AI Bill of Rights published by the White House Office of Science and Technology Policy? <br>2. When was the Blueprint for an AI Bill of Rights released in relation to the announcement of the process to develop it?</code> | <code>About this Document <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was <br>published by the White House Office of Science and Technology Policy in October 2022. This framework was <br>released one year after OSTP announced the launch of a process to develop "a bill of rights for an AI-powered</code> |
  | <code>1. What initiative did the OSTP announce the launch of one year prior to the release mentioned in the context? <br>2. Where can the framework for the AI bill of rights be accessed online?</code> | <code>released one year after OSTP announced the launch of a process to develop "a bill of rights for an AI-powered <br>world." Its release follows a year of public engagement to inform this initiative. The framework is available <br>online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights <br>About the Office of Science and Technology Policy <br>The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology</code> |

- Loss: <code>MatryoshkaLoss</code> with these parameters:

  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [1024, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
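MatryoshkaLoss trains the leading slice of the embedding at several widths (here 1024 down to 64), so at inference time an embedding can be truncated to any of those dims and re-normalized before cosine comparison. A hedged numpy sketch of that post-processing step (not the library's API):

```python
import numpy as np

def truncate_and_normalize(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and rescale to unit length."""
    truncated = embedding[:dim]
    return truncated / np.linalg.norm(truncated)

full = np.random.rand(1024)                # stand-in for a full model embedding
small = truncate_and_normalize(full, 256)  # 256 is one of the matryoshka_dims above
print(small.shape)  # (256,)
```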
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
### Training Logs
| Epoch | Step | cosine_map@100 |
|---|---|---|
| 1.0 | 25 | 0.9867 |
| 2.0 | 50 | 0.9867 |
| 3.0 | 75 | 0.9867 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
---

**Model:** lw2134/policy_gte_large · **Author:** lw2134 · **Tags:** sentence-similarity, sentence-transformers
**Created:** 2024-10-02 18:59:51+00:00 · **Updated:** 2024-10-02 21:12:10+00:00