ONNX Model Library

Documentation

The README.md on the model page contains no descriptive content; the raw file holds only the following metadata block:

---
license: apache-2.0
language:
- en
base_model:
- guozinan/PuLID
library_name: pytorch
pipeline_tag: image-to-image
---

Note: this model's README.md contains only metadata and no actual documentation. The metadata indicates an image-to-image model based on guozinan/PuLID, built with the PyTorch framework and released under the Apache 2.0 license.
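For reference, the metadata block above is standard Hugging Face model-card YAML front matter. A minimal sketch of extracting its fields with only the standard library (no PyYAML), assuming just the simple `key: value` and `key:` + `- item` shapes used in this card:

```python
# Parse the README.md front matter shown above. This handles only the
# flat "key: value" pairs and one-level "- item" block lists used in
# this model card, not full YAML.
FRONTMATTER = """\
license: apache-2.0
language:
- en
base_model:
- guozinan/PuLID
library_name: pytorch
pipeline_tag: image-to-image
"""

def parse_frontmatter(text):
    meta, key = {}, None
    for line in text.splitlines():
        if line.startswith("- ") and key is not None:
            # Continuation item of the most recent list-valued key.
            meta.setdefault(key, []).append(line[2:].strip())
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            # A bare "key:" line opens a list; otherwise store the scalar.
            meta[key] = value if value else []
    return meta

meta = parse_frontmatter(FRONTMATTER)
print(meta["pipeline_tag"])  # image-to-image
print(meta["base_model"])    # ['guozinan/PuLID']
```

This is enough to read off the fields summarized in the note above (license, base model, library, pipeline tag) without pulling in a YAML dependency.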

adorabook/pulid-flux-adorabook

Author: adorabook

Tags: image-to-image, pytorch

Downloads: 0 · Likes: 0

Created: 2024-12-26 10:30:29+00:00

Updated: 2025-01-06 12:21:12+00:00

View on Hugging Face

Files (87)

.gitattributes
.gitignore
LICENSE
README.md
app.py
app_flux.py
app_v1_1.py
docs/pulid_for_flux.md
docs/pulid_v1.1.md
docs/v1.1_preview.md
eva_clip/__init__.py
eva_clip/bpe_simple_vocab_16e6.txt.gz
eva_clip/constants.py
eva_clip/eva_vit_model.py
eva_clip/factory.py
eva_clip/hf_configs.py
eva_clip/hf_model.py
eva_clip/loss.py
eva_clip/model.py
eva_clip/model_configs/EVA01-CLIP-B-16.json
eva_clip/model_configs/EVA01-CLIP-g-14-plus.json
eva_clip/model_configs/EVA01-CLIP-g-14.json
eva_clip/model_configs/EVA02-CLIP-B-16.json
eva_clip/model_configs/EVA02-CLIP-L-14-336.json
eva_clip/model_configs/EVA02-CLIP-L-14.json
eva_clip/model_configs/EVA02-CLIP-bigE-14-plus.json
eva_clip/model_configs/EVA02-CLIP-bigE-14.json
eva_clip/modified_resnet.py
eva_clip/openai.py
eva_clip/pretrained.py
eva_clip/rope.py
eva_clip/timm_model.py
eva_clip/tokenizer.py
eva_clip/transform.py
eva_clip/transformer.py
eva_clip/utils.py
example_inputs/hinton.jpeg
example_inputs/lecun.jpg
example_inputs/lifeifei.jpg
example_inputs/liuyifei.png
example_inputs/pengwei.jpg
example_inputs/rihanna.webp
example_inputs/zcy.webp
flux/__init__.py
flux/math.py
flux/model.py
flux/modules/__init__.py
flux/modules/autoencoder.py
flux/modules/conditioner.py
flux/modules/layers.py
flux/sampling.py
flux/util.py
handler.py
models/.cache/huggingface/.gitignore
models/.cache/huggingface/download/pulid_v1.1.safetensors.lock
models/.cache/huggingface/download/pulid_v1.1.safetensors.metadata
models/.gitkeep
models/antelopev2/.cache/huggingface/.gitignore
models/antelopev2/.cache/huggingface/download/.gitattributes.lock
models/antelopev2/.cache/huggingface/download/.gitattributes.metadata
models/antelopev2/.cache/huggingface/download/1k3d68.onnx.lock
models/antelopev2/.cache/huggingface/download/1k3d68.onnx.metadata
models/antelopev2/.cache/huggingface/download/2d106det.onnx.lock
models/antelopev2/.cache/huggingface/download/2d106det.onnx.metadata
models/antelopev2/.cache/huggingface/download/genderage.onnx.lock
models/antelopev2/.cache/huggingface/download/genderage.onnx.metadata
models/antelopev2/.cache/huggingface/download/glintr100.onnx.lock
models/antelopev2/.cache/huggingface/download/glintr100.onnx.metadata
models/antelopev2/.cache/huggingface/download/scrfd_10g_bnkps.onnx.lock
models/antelopev2/.cache/huggingface/download/scrfd_10g_bnkps.onnx.metadata
models/antelopev2/.gitattributes
models/antelopev2/1k3d68.onnx
models/antelopev2/2d106det.onnx
models/antelopev2/genderage.onnx
models/antelopev2/glintr100.onnx
models/antelopev2/scrfd_10g_bnkps.onnx
models/pulid_v1.1.safetensors
pulid/attention_processor.py
pulid/encoders.py
pulid/encoders_transformer.py
pulid/pipeline.py
pulid/pipeline_flux.py
pulid/pipeline_v1_1.py
pulid/utils.py
pyproject.toml
requirements.txt
requirements_fp8.txt