tags:

  • sentence-transformers
  • sentence-similarity
  • feature-extraction
  • dense
  • generated_from_trainer
  • dataset_size:3
  • loss:MultipleNegativesRankingLoss

base_model: sentence-transformers/all-MiniLM-L6-v2
pipeline_tag: sentence-similarity
library_name: sentence-transformers

SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2 <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

完整模型架构

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
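
The three modules above mean an embedding is produced by running the BertModel, mean-pooling the token states over the attention mask, and L2-normalizing the result. The same pipeline can be reproduced with the plain transformers API; a minimal sketch, assuming the repository loads directly with AutoModel (it contains a standard BertModel checkpoint per the file list below):

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "aimanfadillah/standardized-v2"  # or the base model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["Cooling type", "Cooling System"]
encoded = tokenizer(sentences, padding=True, truncation=True,
                    max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 384)

# Mean pooling over non-padding tokens, matching pooling_mode_mean_tokens=True
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# The final Normalize() module L2-normalizes each vector
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings @ embeddings.T)

Because the outputs are unit-length, cosine similarity reduces to a plain matrix product here.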

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference:

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("aimanfadillah/standardized-v2")
# Run inference
sentences = [
    'The weather is lovely today.',
    "It's so sunny outside!",
    'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6660, 0.1046],
#         [0.6660, 1.0000, 0.1411],
#         [0.1046, 0.1411, 1.0000]])
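
The repository also ships an ONNX export (onnx/model.onnx, listed in the files below), so inference can run on ONNX Runtime instead of the PyTorch weights. A minimal sketch, assuming Sentence Transformers' optional ONNX backend and the onnxruntime extra are installed:

# pip install "sentence-transformers[onnx]"   (CPU; use [onnx-gpu] for CUDA)
from sentence_transformers import SentenceTransformer

# backend="onnx" makes the library load onnx/model.onnx from the repository
model = SentenceTransformer("aimanfadillah/standardized-v2", backend="onnx")
embeddings = model.encode(["The weather is lovely today.", "It's so sunny outside!"])
print(embeddings.shape)  # (2, 384)

The backend switch changes only the execution engine; tokenization, pooling, and normalization behave as in the PyTorch path.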


Training Details

Training Dataset

Unnamed Dataset

  • Size: 3 training samples
  • Columns: <code>sentence_0</code> and <code>sentence_1</code>
  • Approximate statistics based on the first 3 samples:

    |         | sentence_0                                     | sentence_1                                     |
    |---------|------------------------------------------------|------------------------------------------------|
    | type    | string                                         | string                                         |
    | details | min: 4 tokens, mean: 4.0 tokens, max: 4 tokens | min: 4 tokens, mean: 4.0 tokens, max: 4 tokens |

  • Samples:

    | sentence_0                 | sentence_1                     |
    |----------------------------|--------------------------------|
    | <code>Cooling type</code>  | <code>Cooling System</code>    |
    | <code>Shelf type</code>    | <code>Shelf Material</code>    |
    | <code>Interior lamp</code> | <code>Interior Lighting</code> |

  • Loss: <code>MultipleNegativesRankingLoss</code> with these parameters (a reproduction sketch follows the parameter block):
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": false
    }
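
A run like the one documented here can be reproduced with the SentenceTransformerTrainer. A minimal sketch, assuming the three (sentence_0, sentence_1) pairs above and the non-default hyperparameters listed in the next section; the output_dir name is illustrative:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Three (anchor, positive) pairs; the other in-batch examples act as negatives
train_dataset = Dataset.from_dict({
    "sentence_0": ["Cooling type", "Shelf type", "Interior lamp"],
    "sentence_1": ["Cooling System", "Shelf Material", "Interior Lighting"],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cos_sim is the default similarity_fct

args = SentenceTransformerTrainingArguments(
    output_dir="standardized-v2",  # illustrative
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()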
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

<details><summary>Click to expand</summary>

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
  • router_mapping: {}
  • learning_rate_mapping: {}

</details>

Framework Versions

  • Python: 3.12.4
  • Sentence Transformers: 5.1.0
  • Transformers: 4.56.0
  • PyTorch: 2.8.0+cpu
  • Accelerate: 1.10.1
  • Datasets: 4.0.0
  • Tokenizers: 0.22.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}


aimanfadillah/standardized-v2

Author: aimanfadillah

Tags: sentence-similarity, sentence-transformers

Created: 2025-09-03 07:17:43+00:00

Updated: 2025-09-03 07:18:27+00:00

Files (13)

.gitattributes
1_Pooling/config.json
README.md
config.json
config_sentence_transformers.json
model.safetensors
modules.json
onnx/model.onnx
sentence_bert_config.json
special_tokens_map.json
tokenizer.json
tokenizer_config.json
vocab.txt