No documentation available yet.
Sean-Bradley/ComfyUI
Author: Sean-Bradley
Downloads: 111
Likes: 0
Created: 2025-10-15 22:15:14+00:00
Updated: 2026-03-16 14:00:11+00:00
Files on Hugging Face: 156
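Each file listed below can be fetched directly from the repo via Hugging Face's standard `resolve` URL scheme (`https://huggingface.co/{repo}/resolve/{revision}/{path}`). A minimal sketch; the helper name is illustrative, and the example path is one of the files in the listing:

```python
# Build the direct-download URL for a file in the Sean-Bradley/ComfyUI repo,
# using Hugging Face's standard resolve-URL pattern.
REPO_ID = "Sean-Bradley/ComfyUI"

def hf_file_url(path: str, revision: str = "main") -> str:
    """Return the direct 'resolve' URL for a file in the repo."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{path}"

# Example: the Flux autoencoder from the listing
print(hf_file_url("models/vae/ae.safetensors"))
```

For programmatic use, the `huggingface_hub` library's `hf_hub_download(repo_id=..., filename=...)` resolves the same paths and handles caching.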
.gitattributes
models/SEEDVR2/.validation_cache.json
models/SEEDVR2/ema_vae_fp16.safetensors
models/SEEDVR2/seedvr2_ema_3b_fp8_e4m3fn.safetensors
models/SEEDVR2/seedvr2_ema_7b_fp8_e4m3fn_mixed_block35_fp16.safetensors
models/animatediff_models/AnimateLCM_sd15_t2v.safetensors
models/animatediff_models/animatediff_lightning_1step_diffusers.safetensors
models/animatediff_models/animatediff_lightning_2step_diffusers.safetensors
models/animatediff_models/animatediff_lightning_4step_diffusers.safetensors
models/animatediff_models/animatediff_lightning_8step_diffusers.safetensors
models/animatediff_models/mm_sdxl_v10_beta.ckpt
models/audio_encoders/wav2vec2_large_english_fp16.safetensors
models/audio_encoders/wav2vec2_large_english_fp8_e4m3fn.safetensors
models/checkpoints/512-inpainting-ema.safetensors
models/checkpoints/Juggernaut-XI-byRunDiffusion-Lightning.safetensors
models/checkpoints/absolutereality_v181.safetensors
models/checkpoints/ace_step_v1_3.5b.safetensors
models/checkpoints/anythingV3_fp16.safetensors
models/checkpoints/architecturerealmix_v11.safetensors
models/checkpoints/cardosAnime_v20.safetensors
models/checkpoints/dreamCreationVirtual3DECommerce_v10.safetensors
models/checkpoints/epiCrealism.safetensors
models/checkpoints/flux1-dev-fp8.safetensors
models/checkpoints/flux1-schnell-fp8.safetensors
models/checkpoints/interiordesignsuperm_v2.safetensors
models/checkpoints/ltxv-13b-0.9.8-distilled-fp8.safetensors
models/checkpoints/ltxv-2b-0.9.8-distilled-fp8.safetensors
models/checkpoints/majicmixRealistic_v7.safetensors
models/checkpoints/realisticVisionV60B1_v51HyperVAE.safetensors
models/checkpoints/realvisxlV50_v50LightningBakedvae.safetensors
models/checkpoints/sd3.5_large_fp8_scaled.safetensors
models/checkpoints/sd_xl_base_1.0.safetensors
models/checkpoints/stableVideoDiffusion_img2vid.safetensors
models/checkpoints/stableVideoDiffusion_img2vidXt.safetensors
models/checkpoints/stableVideoDiffusion_img2vidXt11.safetensors
models/checkpoints/svd.safetensors
models/checkpoints/svd_xt.safetensors
models/checkpoints/svd_xt_1_1.safetensors
models/checkpoints/v1-5-pruned-emaonly-fp16.safetensors
models/checkpoints/v2-1_512-ema-pruned.safetensors
models/checkpoints/v2-1_768-ema-pruned.safetensors
models/checkpoints/wildcardxXLTURBO_wildcardxXLTURBOV10.safetensors
models/checkpoints/zavychromaxl_v100.safetensors
models/checkpoints/欧美动漫ToonYou.safetensors
models/clip/clip_l.safetensors
models/clip/qwen_2.5_vl_7b_fp8_scaled.safetensors
models/clip/t5-v1_1-xxl-encoder-Q8_0.gguf
models/clip/t5xxl_fp16.safetensors
models/clip/t5xxl_fp8_e4m3fn.safetensors
models/clip/t5xxl_fp8_e4m3fn_scaled.safetensors
models/clip/umt5-xxl-enc-bf16.safetensors
models/clip/umt5-xxl-encoder-Q8_0.gguf
models/clip/umt5_xxl_fp8_e4m3fn_scaled.safetensors
models/clip_vision/CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
models/clip_vision/clip-vit-large-patch14-336.bin
models/clip_vision/flux_clip_config.json
models/controlnet/control-lora-canny-rank256.safetensors
models/controlnet/control-lora-depth-rank256.safetensors
models/controlnet/control-lora-openposeXL2-rank256.safetensors
models/controlnet/control_v11f1p_sd15_depth_fp16.safetensors
models/controlnet/control_v11p_sd15_openpose_fp16.safetensors
models/controlnet/control_v11p_sd15_scribble_fp16.safetensors
models/controlnet/flux-canny-controlnet-v3.safetensors
models/controlnet/flux-depth-controlnet-v3.safetensors
models/controlnet/flux_union_controlnet.safetensors
models/controlnet/put_controlnets_and_t2i_here
models/detection/vitpose_h_wholebody_data.bin
models/detection/vitpose_h_wholebody_model.onnx
models/detection/yolov10m.onnx
models/diffusion_models/Qwen-Image-Edit-2511-FP8_e4m3fn.safetensors
models/diffusion_models/controlnext-svd_v2-unet-fp16_converted.safetensors
models/diffusion_models/flux-2-klein-9b-Q8_0.gguf
models/diffusion_models/qwen_image_2512_fp8_e4m3fn.safetensors
models/diffusion_models/qwen_image_edit_2509_fp8_e4m3fn.safetensors
models/diffusion_models/qwen_image_fp8_e4m3fn.safetensors
models/diffusion_models/wan2.2_ti2v_5B_fp16.safetensors
models/face_parsing/parsing_bisenet.pth
models/ipadapter-flux/ip-adapter.bin
models/ipadapter/ip-adapter-plus_sd15.safetensors
models/ipadapter/ip-adapter-plus_sdxl_vit-h.safetensors
models/loras/Hyper-FLUX.1-dev-8steps-lora.safetensors
models/loras/LCM_LoRA_SD15.safetensors
models/loras/LCM_LoRA_SDXL.safetensors
models/loras/LCM_LoRA_SSB.safetensors
models/loras/Qwen-Edit-2509-Multiple-angles.safetensors
models/loras/Qwen-Image-Edit-2509-Light-Migration.safetensors
models/loras/Qwen-Image-Edit-2509-Lightning-4steps-V1.0-bf16.safetensors
models/loras/Qwen-Image-Lightning-4steps-V1.0.safetensors
models/loras/Qwen-Image-Lightning-4steps-V2.0-bf16.safetensors
models/loras/Qwen-Image-Lightning-8steps-V2.0-bf16.safetensors
models/loras/Replace_it_lora.safetensors
models/loras/SBCODE_CosmoCat_20.safetensors
models/loras/SBCODE_Emmy_Qwen_2512_20.safetensors
models/loras/SBCODE_Maria_20.safetensors
models/loras/SBCODE_Pria_20.safetensors
models/loras/SBCODE_Sofia_Qwen_2512_20.safetensors
models/loras/Wan2.2-I2V-A14B-lora-high_noise.safetensors
models/loras/Wan2.2-I2V-A14B-lora-low_noise.safetensors
models/loras/Wan2.2-T2V-A14B-lora-high_noise.safetensors
models/loras/Wan2.2-T2V-A14B-lora-low_noise.safetensors
models/loras/ip-adapter-faceid-plus_sd15_lora.safetensors
models/loras/ltxv-097-ic-lora-pose-control-comfyui.safetensors
models/loras/wan2.2_animate_14B_relight_lora_bf16.safetensors
models/mmaudio/apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors
models/mmaudio/mmaudio_large_44k_v2_fp16.safetensors
models/mmaudio/mmaudio_synchformer_fp16.safetensors
models/mmaudio/mmaudio_vae_44k_fp16.safetensors
models/sam2/sam2.1_hiera_base_plus-fp16.safetensors
models/sam2/sam2.1_hiera_tiny-fp16.safetensors
models/sam2/sam2_hiera_base_plus.safetensors
models/sonic/RIFE/flownet.pkl
models/sonic/audio2bucket.pth
models/sonic/audio2token.pth
models/sonic/face_yolov8m.pt
models/sonic/unet.pth
models/sonic/whisper-tiny/config.json
models/sonic/whisper-tiny/model.safetensors
models/sonic/whisper-tiny/preprocessor_config.json
models/text_encoders/Qwen3-4B-Q4_K_M.gguf
models/text_encoders/Qwen3-4B-Q5_K_M.gguf
models/text_encoders/qwen_3_4b.safetensors
models/tts/chatterbox/conds.pt
models/tts/chatterbox/s3gen.safetensors
models/tts/chatterbox/t3_cfg.safetensors
models/tts/chatterbox/tokenizer.json
models/tts/chatterbox/ve.safetensors
models/unet/Qwen-Image-Edit-2509-Q3_K_M.gguf
models/unet/Wan2.2-Animate-14B-Q8_0.gguf
models/unet/Wan2.2-Fun-A14B-Control_HighNoise-Q8_0.gguf
models/unet/Wan2.2-Fun-A14B-Control_LowNoise-Q8_0.gguf
models/unet/Wan2.2-I2V-A14B-HighNoise-Q8_0.gguf
models/unet/Wan2.2-I2V-A14B-LowNoise-Q8_0.gguf
models/unet/Wan2.2-S2V-14B-Q8_0.gguf
models/unet/Wan2.2-T2V-A14B-HighNoise-Q8_0.gguf
models/unet/Wan2.2-T2V-A14B-LowNoise-Q8_0.gguf
models/unet/Wan2.2-TI2V-5B-Q8_0.gguf
models/unet/Wan2.2-VACE-Fun-A14B-high-noise-Q8_0.gguf
models/unet/Wan2.2-VACE-Fun-A14B-low-noise-Q8_0.gguf
models/unet/Wan2_2-Animate-14B_fp8_scaled_e4m3fn_KJ_v2.safetensors
models/unet/flux1-kontext-dev-Q2_K.gguf
models/unet/flux1-kontext-dev-Q8_0.gguf
models/unet/flux1-krea-dev-Q8_0.gguf
models/unet/flux2-dev-Q3_K_M.gguf
models/unet/z_image_turbo-Q3_K_M.gguf
models/upscale_models/4x-ClearRealityV1.pth
models/upscale_models/RealESRGAN_x2.pth
models/upscale_models/RealESRGAN_x4.pth
models/upscale_models/RealESRGAN_x4plus.pth
models/upscale_models/RealESRGAN_x8.pth
models/vae/ae.safetensors
models/vae/flux2-vae.safetensors
models/vae/qwen_image_vae.safetensors
models/vae/vae-ft-mse-840000-ema-pruned.safetensors
models/vae/wan2.2_vae.safetensors
models/vae/wan_2.1_vae.safetensors
models/yolo/model.pt