Model card
Checkpoints based on StableAnimator
If you find StableAnimator useful, <b>please consider giving this GitHub repository a star and citing it</b>:
@article{tu2024stableanimator,
  title={StableAnimator: High-Quality Identity-Preserving Human Image Animation},
  author={Shuyuan Tu and Zhen Xing and Xintong Han and Zhi-Qi Cheng and Qi Dai and Chong Luo and Zuxuan Wu},
  journal={arXiv preprint arXiv:2411.17697},
  year={2024}
}
dhlee3000/music2dance_cont
Author: dhlee3000
image-to-video
diffusers
Created: 2025-10-16 13:55:51+00:00
Updated: 2025-10-16 14:25:16+00:00
Files on Hugging Face (58)
.gitattributes
Animation/checkpoint-40000/face_encoder-40000.pth
Animation/checkpoint-40000/model.safetensors
Animation/checkpoint-40000/model_1.safetensors
Animation/checkpoint-40000/model_2.safetensors
Animation/checkpoint-40000/music_encoder-40000.pth
Animation/checkpoint-40000/optimizer.bin
Animation/checkpoint-40000/random_states_0.pkl
Animation/checkpoint-40000/scaler.pt
Animation/checkpoint-40000/scheduler.bin
Animation/checkpoint-40000/unet-40000.pth
Animation/face_encoder.pth
Animation/glintr100_torch.pth
Animation/pose_net.pth
Animation/unet.pth
DWPose/dw-ll_ucoco_384.onnx
DWPose/yolox_l.onnx
README.md
assets/figures/case-17.gif
assets/figures/case-18.gif
assets/figures/case-24.gif
assets/figures/case-35.gif
assets/figures/case-42.gif
assets/figures/case-45.gif
assets/figures/case-46.gif
assets/figures/case-47.gif
assets/figures/case-5.gif
assets/figures/case-61.gif
assets/figures/framework.jpg
assets/gif/case-35.gif
assets/gif/case-42.gif
config.json
inference.zip
models/antelopev2/.gitattributes
models/antelopev2/1k3d68.onnx
models/antelopev2/2d106det.onnx
models/antelopev2/genderage.onnx
models/antelopev2/glintr100.onnx
models/antelopev2/scrfd_10g_bnkps.onnx
stable-video-diffusion-img2vid-xt/.gitattributes
stable-video-diffusion-img2vid-xt/LICENSE.md
stable-video-diffusion-img2vid-xt/README.md
stable-video-diffusion-img2vid-xt/comparison.png
stable-video-diffusion-img2vid-xt/feature_extractor/preprocessor_config.json
stable-video-diffusion-img2vid-xt/image_encoder/config.json
stable-video-diffusion-img2vid-xt/image_encoder/model.fp16.safetensors
stable-video-diffusion-img2vid-xt/image_encoder/model.safetensors
stable-video-diffusion-img2vid-xt/model_index.json
stable-video-diffusion-img2vid-xt/output_tile.gif
stable-video-diffusion-img2vid-xt/scheduler/scheduler_config.json
stable-video-diffusion-img2vid-xt/svd_xt.safetensors
stable-video-diffusion-img2vid-xt/svd_xt_image_decoder.safetensors
stable-video-diffusion-img2vid-xt/unet/config.json
stable-video-diffusion-img2vid-xt/unet/diffusion_pytorch_model.fp16.safetensors
stable-video-diffusion-img2vid-xt/unet/diffusion_pytorch_model.safetensors
stable-video-diffusion-img2vid-xt/vae/config.json
stable-video-diffusion-img2vid-xt/vae/diffusion_pytorch_model.fp16.safetensors
stable-video-diffusion-img2vid-xt/vae/diffusion_pytorch_model.safetensors
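Since the repository bundles several large model families (the StableAnimator checkpoints, DWPose detectors, the antelopev2 face models, and a full SVD-XT copy), you will often want only a subset. A minimal sketch of fetching selected folders with `huggingface_hub.snapshot_download` and its `allow_patterns` glob filter follows; the small `select` helper only mirrors that fnmatch-style filtering locally for illustration, and the file subset is taken from the listing above. This is an assumed usage pattern, not documented by the repo author.

```python
# Sketch: download only the Animation checkpoints and DWPose models from
# dhlee3000/music2dance_cont, using allow_patterns-style glob filtering.
from fnmatch import fnmatch

# A subset of the repository's files, copied from the listing above.
REPO_FILES = [
    "Animation/checkpoint-40000/face_encoder-40000.pth",
    "Animation/checkpoint-40000/unet-40000.pth",
    "Animation/pose_net.pth",
    "Animation/unet.pth",
    "DWPose/yolox_l.onnx",
    "stable-video-diffusion-img2vid-xt/svd_xt.safetensors",
]

def select(files, patterns):
    """Return the files matching any glob pattern (fnmatch's '*' also
    crosses '/', like huggingface_hub's allow_patterns)."""
    return [f for f in files if any(fnmatch(f, p) for p in patterns)]

if __name__ == "__main__":
    # Requires `pip install huggingface_hub` and network access.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        "dhlee3000/music2dance_cont",
        allow_patterns=["Animation/*", "DWPose/*"],
    )
    print("downloaded to", local_dir)
```

With the subset above, `select(REPO_FILES, ["Animation/*.pth"])` keeps all four `.pth` files (including those under `checkpoint-40000/`) and drops everything else.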