# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model fine-tuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2 (at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9)
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
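These properties can be verified at runtime; a minimal sketch, using the repository id from the usage section below:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("vazish/all-MiniLM-L6-v2-fine-tuned_0")

# Inputs longer than 256 tokens are truncated by the Transformer module.
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
```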
### Model Sources

- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers GitHub repository
- Hugging Face: Sentence Transformers on Hugging Face
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
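For readers who want to see what these three modules amount to, here is a sketch that reproduces the pipeline with the plain transformers library: BertModel token embeddings, attention-mask-weighted mean pooling, then L2 normalization. It assumes the tokenizer and weights in this repository load via AutoModel; the SentenceTransformer API shown below remains the recommended path.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "vazish/all-MiniLM-L6-v2-fine-tuned_0"
tokenizer = AutoTokenizer.from_pretrained(repo)
bert = AutoModel.from_pretrained(repo)

encoded = tokenizer(
    ["example sentence"],
    padding=True, truncation=True, max_length=256, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (0): Transformer

# (1): Pooling with pooling_mode_mean_tokens=True, ignoring padding tokens
mask = encoded["attention_mask"].unsqueeze(-1).float()
mean_pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2): Normalize() -> unit-length vectors, so dot product equals cosine
embeddings = F.normalize(mean_pooled, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])
```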
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("vazish/all-MiniLM-L6-v2-fine-tuned_0")
# Run inference
sentences = [
    'Tidal - High-Fidelity Music Streaming with Master Quality Audio',
    'Walmart - Everyday Low Prices on Groceries, Electronics, and More',
    'Notion - Integrated Workspace for Notes, Tasks, Databases, and Wikis',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
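Because the embeddings are unit-normalized, they plug directly into semantic search. A small usage sketch (the query string is an illustrative stand-in):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("vazish/all-MiniLM-L6-v2-fine-tuned_0")

corpus = [
    'Tidal - High-Fidelity Music Streaming with Master Quality Audio',
    'Walmart - Everyday Low Prices on Groceries, Electronics, and More',
    'Notion - Integrated Workspace for Notes, Tasks, Databases, and Wikis',
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode('high quality music streaming', convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.4f}", corpus[hit['corpus_id']])
```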
## Evaluation

### Metrics

#### Semantic Similarity

| Metric | Value |
|---|---|
| pearson_cosine | 0.9823 |
| spearman_cosine | 0.2608 |
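Scores of this form are typically produced with EmbeddingSimilarityEvaluator, which correlates gold labels with the cosine similarity of the embedded pairs. A sketch under that assumption; the evaluation split itself is not published, so the pairs below are illustrative stand-ins:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("vazish/all-MiniLM-L6-v2-fine-tuned_0")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=[
        'Tidal - High-Fidelity Music Streaming with Master Quality Audio',
        'TripAdvisor - Hotel Reviews, Photos, and Travel Forums',
    ],
    sentences2=[
        'YouTube Music - Music Videos, Official Albums, and Live Performances',
        'Docker Hub - Container Image Repository for DevOps Environments',
    ],
    scores=[1.0, 0.0],  # gold similarity labels in [0, 1]
)
print(evaluator(model))  # dict including pearson_cosine and spearman_cosine
```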
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 49,800 training samples
- Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
- Approximate statistics based on the first 1000 samples:

|  | sentence_0 | sentence_1 | label |
|---|---|---|---|
| type | string | string | float |
| details | <ul><li>min: 10 tokens</li><li>mean: 14.76 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 14.64 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.04</li><li>max: 1.0</li></ul> |

- Samples:

| sentence_0 | sentence_1 | label |
|---|---|---|
| <code>TripAdvisor - Hotel Reviews, Photos, and Travel Forums</code> | <code>Docker Hub - Container Image Repository for DevOps Environments</code> | <code>0.0</code> |
| <code>Mastodon - Decentralized Social Media for Niche Communities</code> | <code>Allrecipes - User-Submitted Recipes, Reviews, and Cooking Tips</code> | <code>0.0</code> |
| <code>YouTube Music - Music Videos, Official Albums, and Live Performances</code> | <code>ESPN - Sports News, Live Scores, Stats, and Highlights</code> | <code>0.0</code> |

- Loss: <code>CosineSimilarityLoss</code> with these parameters:

```json
{
    "loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters

#### Non-Default Hyperparameters

- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- multi_dataset_batch_sampler: round_robin

#### All Hyperparameters

<details><summary>Click to expand</summary>
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|---|---|---|---|
| 0.0372 | 500 | 0.0218 | - |
| 0.0745 | 1000 | 0.0151 | - |
| 0.1117 | 1500 | 0.0113 | - |
| 0.1490 | 2000 | 0.0076 | - |
| 0.1862 | 2500 | 0.0063 | - |
| 0.2234 | 3000 | 0.0054 | - |
| 0.2607 | 3500 | 0.0045 | - |
| 0.2979 | 4000 | 0.0041 | - |
| 0.3351 | 4500 | 0.0027 | - |
| 0.3724 | 5000 | 0.0028 | - |
| 0.4096 | 5500 | 0.0026 | - |
| 0.4469 | 6000 | 0.0021 | - |
| 0.4841 | 6500 | 0.0019 | - |
| 0.5213 | 7000 | 0.0022 | - |
| 0.5586 | 7500 | 0.0017 | - |
| 0.5958 | 8000 | 0.0018 | - |
| 0.6331 | 8500 | 0.0015 | - |
| 0.6703 | 9000 | 0.0015 | - |
| 0.7075 | 9500 | 0.0018 | - |
| 0.7448 | 10000 | 0.0014 | - |
| 0.7820 | 10500 | 0.0017 | - |
| 0.8192 | 11000 | 0.0012 | - |
| 0.8565 | 11500 | 0.0014 | - |
| 0.8937 | 12000 | 0.001 | - |
| 0.9310 | 12500 | 0.0011 | - |
| 0.9682 | 13000 | 0.001 | - |
| 1.0054 | 13500 | 0.0009 | - |
| 1.0427 | 14000 | 0.0011 | - |
| 1.0799 | 14500 | 0.001 | - |
| 1.1172 | 15000 | 0.0009 | - |
| 1.1544 | 15500 | 0.0008 | - |
| 1.1916 | 16000 | 0.001 | - |
| 1.2289 | 16500 | 0.0011 | - |
| 1.2661 | 17000 | 0.0011 | - |
| 1.3033 | 17500 | 0.0006 | - |
| 1.3406 | 18000 | 0.0011 | - |
| 1.3778 | 18500 | 0.0008 | - |
| 1.4151 | 19000 | 0.0011 | - |
| 1.4523 | 19500 | 0.0009 | - |
| 1.4895 | 20000 | 0.0011 | - |
| 1.5268 | 20500 | 0.0009 | - |
| 1.5640 | 21000 | 0.0009 | - |
| 1.6013 | 21500 | 0.0008 | - |
| 1.6385 | 22000 | 0.0005 | - |
| 1.6757 | 22500 | 0.001 | - |
| 1.7130 | 23000 | 0.0008 | - |
| 1.7502 | 23500 | 0.0007 | - |
| 1.7874 | 24000 | 0.0007 | - |
| 1.8247 | 24500 | 0.0008 | - |
| 1.8619 | 25000 | 0.001 | - |
| 1.8992 | 25500 | 0.0009 | - |
| 1.9364 | 26000 | 0.0008 | - |
| 1.9736 | 26500 | 0.0009 | - |
| 2.0109 | 27000 | 0.0007 | - |
| 2.0481 | 27500 | 0.0006 | - |
| 2.0854 | 28000 | 0.0007 | - |
| 2.1226 | 28500 | 0.0006 | - |
| 2.1598 | 29000 | 0.0007 | - |
| 2.1971 | 29500 | 0.001 | - |
| 2.2343 | 30000 | 0.0006 | - |
| 2.2715 | 30500 | 0.0006 | - |
| 2.3088 | 31000 | 0.001 | - |
| 2.3460 | 31500 | 0.0007 | - |
| 2.3833 | 32000 | 0.0008 | - |
| 2.4205 | 32500 | 0.0006 | - |
| 2.4577 | 33000 | 0.0007 | - |
| 2.4950 | 33500 | 0.0007 | - |
| 2.5322 | 34000 | 0.001 | - |
| 2.5694 | 34500 | 0.0007 | - |
| 2.6067 | 35000 | 0.0007 | - |
| 2.6439 | 35500 | 0.0008 | - |
| 2.6812 | 36000 | 0.0007 | - |
| 2.7184 | 36500 | 0.0006 | - |
| 2.7556 | 37000 | 0.0007 | - |
| 2.7929 | 37500 | 0.0007 | - |
| 2.8301 | 38000 | 0.0005 | - |
| 2.8674 | 38500 | 0.0009 | - |
| 2.9046 | 39000 | 0.0006 | - |
| 2.9418 | 39500 | 0.0007 | - |
| 2.9791 | 40000 | 0.0008 | - |
| -1 | -1 | - | 0.2608 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```