ONNX Model Library

Documentation



Expert-Level Generative AI Tutorials

Hello everyone. I am Dr. Furkan Gözükara. I hold a PhD in Computer Engineering and currently work as an Assistant Professor, as well as a full-time Generative AI researcher, developer, and tutorial creator.

SECourses is a YouTube channel focused on the following topics: tech, AI, news, science, robotics, singularity, ComfyUI, SwarmUI, machine learning, artificial intelligence, humanoid robots, Wan 2.2, FLUX, Krea, Qwen Image, vision language models, Stable Diffusion, SDXL, SeedVR2, TOPAZ, SUPIR, ChatGPT, Gemini, large language models, Claude, coding, agents, agentic AI, animation, deepfakes, Fooocus, ControlNet, RunPod, Massed Compute, Windows, hardware, inpainting, cloud computing, Kaggle, Colab, Automatic1111, SD Web UI, TensorRT, DreamBooth, LoRA, training, fine-tuning, Kohya, OneTrainer, upscaling, 3D, Musubi Tuner, tutorials, Qwen Image Edit, image upscaling, video upscaling, speech synthesis, voice training, text-to-speech, text-to-music, image-to-image, text-to-video, video-to-video, style transfer, style training, FLUX Kontext, face swap, lip sync, text-to-3D, avatar generation, 3D generation, AGI, prompt engineering, engineering, Gradio, CUDA, GGUF, quantization, GPT-5, Whisper, and more.

Links to Our Platforms

1️⃣ SECourses YouTube (48,000+ subscribers) — a must-follow ⤵️

1️⃣ https://www.youtube.com/@SECourses


2️⃣ SECourses Patreon (25,000+ subscribers) — a must-follow ⤵️

2️⃣ https://www.patreon.com/c/SECourses


3️⃣ SECourses Discord (10,000+ members) — a must-join ⤵️

3️⃣ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388


LinkedIn : https://www.linkedin.com/in/furkangozukara

Twitter : https://twitter.com/GozukaraFurkan

Linktr : https://linktr.ee/FurkanGozukara

Google Scholar : https://scholar.google.com/citations?user=_2_KAUsAAAAJ&hl=en

Mastodon : https://mastodon.social/@furkangozukara


Our GitHub repository of Stable Diffusion and other tutorials, with 2,500+ stars ⤵️

https://github.com/FurkanGozukara/Stable-Diffusion


About This Repository

I keep this list up to date. I have some great ideas for upcoming videos and am working hard to find the time to make them.

Any criticism is welcome. I am always striving to improve the quality of my tutorial videos. Please leave suggestions in the comments, along with whatever you would like to see in future videos.

All videos come with manually corrected subtitles and carefully prepared chapters. You can watch with these accurate subtitles, or jump straight to the chapters that interest you.

Since teaching is my profession, I usually do not skip any important part. As a result, you may find my videos slightly long.

YouTube playlist links: Stable Diffusion tutorials, Automatic1111 Web UI and Google Colab guides, DreamBooth, Textual Inversion / embeddings, LoRA, AI upscaling, video-to-anime

Tutorial Videos

1. How to install Python, set up a virtual environment (VENV), set the default Python system path, and install Git
2. The easiest way to install and run Stable Diffusion Web UI on your PC with an open-source automatic installer
3. How to use Stable Diffusion V2.1 and different models in the Web UI - SD 1.5 vs 2.1 vs Anything V3
4. Zero-to-hero Stable Diffusion DreamBooth tutorial with the Automatic1111 Web UI - extremely detailed

5. DreamBooth massively improved - January 22 update - better success rate when training Stable Diffusion models in the Web UI
6. How to inject your trained subject (such as your face) into any custom Stable Diffusion model - Web UI
7. How to do Stable Diffusion LoRA training in the Web UI on different models - tested on SD 1.5 and SD 2.1
8. 8GB LoRA training - fixing CUDA and xformers issues for DreamBooth and Textual Inversion in the Automatic1111 SD UI

9. How to do Stable Diffusion Textual Inversion (TI) / text embeddings via the Automatic1111 Web UI - tutorial
10. How to generate stunning epic text with Stable Diffusion AI - no Photoshop - free - depth-to-image
11. How to run and convert Stable Diffusion Diffusers (.bin weights) and DreamBooth models into CKPT files
12. Forget Photoshop - how to transform images with text prompts using the InstructPix2Pix model in the NMKD GUI

13. Turn your selfies into stunning AI avatars with Stable Diffusion - better than Lensa, and free
14. Stable Diffusion on Google Colab: resuming training, directories, transfers, cloning, custom models, CKPT SafeTensors
15. Become a Stable Diffusion prompt master with DAAM - attention heatmaps for every token/word used
16. Turn sketches into masterpieces with Stable Diffusion ControlNet AI - usage tutorial

17. Sketches into epic art with one click: a guide to Stable Diffusion ControlNet in the Automatic1111 Web UI
18. The ultimate RunPod tutorial for Stable Diffusion - Automatic1111 - data transfer, extensions, CivitAI
19. How to install DreamBooth and Automatic1111 on RunPod with the latest libraries - 2x speed-up - cuDNN - CUDA (https://youtu.be/c_S2kFAefTQ)
20. The amazing new ControlNet OpenPose editor extension and image mixing - Stable Diffusion Web UI tutorial

21. Automatic1111 Stable Diffusion DreamBooth guide: comparative test of the best number of classification images
22. Epic Web UI DreamBooth update - new best settings - 10 Stable Diffusion training runs compared on RunPods
23. New style transfer extension for Automatic1111 Stable Diffusion, and ControlNet T2I-Adapter color control
24. Tutorial on generating text art and beautiful logos for free with the ControlNet Stable Diffusion Web UI

25. Guide to installing the new DreamBooth and Torch 2 on your Automatic1111 Web UI PC for an epic performance boost
26. Train Midjourney-level style and yourself into the SD 1.5 model via DreamBooth Stable Diffusion
27. Video-to-anime - generate epic animation from your phone recordings with Stable Diffusion AI
28. Midjourney-level, newly open-sourced Kandinsky 2.1 beats Stable Diffusion - installation and usage guide

29. RTX 3090 vs RTX 3060 ultimate showdown: Stable Diffusion, machine learning, AI, and video rendering performance
30. Generate studio-quality realistic photos via Kohya LoRA Stable Diffusion training - full tutorial
31. DeepFloyd IF by Stability AI - is it Stable Diffusion XL or version 3? We review it and show how to use it
32. How to find the best Stable Diffusion generated images with DeepFace AI - DreamBooth / LoRA training

33. Amazing deepfake tutorial: turn anyone into your favorite movie star! PC and Google Colab - roop
34. Stable Diffusion now has Photoshop's Generative Fill - ControlNet extension - tutorial
35. Person-cropping script and 4K+ resolution class/regularization images for Stable Diffusion DreamBooth / LoRA
36. New image post-processing scripts for Stable Diffusion 2 and the best class/regularization image datasets

37. Step-by-step tutorial for Roop deepfake on RunPod, with a self-made automatic installation script
38. Zero-to-hero ControlNet tutorial: Stable Diffusion Web UI extension - complete feature guide
39. The end of photography - make your own studio photos with AI, free via DreamBooth training
40. How to use Stable Diffusion XL (SDXL 0.9) on Google Colab for free

41. Run Stable Diffusion XL (SDXL) locally on your PC - 8GB VRAM - easy tutorial with an automatic installer
42. Tutorial for SDXL on RunPod: automatic installer with the Refiner and an amazing native Diffusers Gradio app
43. ComfyUI master tutorial - Stable Diffusion XL (SDXL) - install on PC, Google Colab (free), and RunPod
44. First-ever SDXL training with Kohya LoRA - Stable Diffusion XL training will replace the older models

45. How to use SDXL in the Automatic1111 Web UI - SD Web UI vs ComfyUI - easy local install tutorial/guide
46. How to run Stable Diffusion X-Large (SDXL) with the Automatic1111 Web UI on RunPod - easy tutorial
47. Become a master of SDXL training with Kohya SS LoRAs - combine the power of Automatic1111 and SDXL LoRAs
48. How to do SDXL LoRA training on RunPod with the Kohya SS GUI trainer and use the LoRAs in the Automatic1111 UI

49. How to do SDXL training for free with Kohya LoRA - Kaggle - no GPU needed - crushes Google Colab
50. How to use Stable Diffusion, SDXL, ControlNet, and LoRA for free on Kaggle, just like Google Colab (no GPU needed)
51. Turn videos into animation with just one click - ReRender A Video tutorial - Windows installer
52. Turn videos into animation/3D with just one click - ReRender A Video tutorial - RunPod installer

53. Double your Stable Diffusion inference speed with RTX-accelerated TensorRT: a comprehensive guide
54. How to install and run TensorRT on RunPod, Unix, Linux for 2x faster Stable Diffusion inference speed
55. SOTA image preprocessing scripts for Stable Diffusion training - automatic subject cropping and face focus
56. Fooocus Stable Diffusion Web UI - use SDXL the way you use Midjourney - easy to use, high quality

57. How to do Stable Diffusion XL (SDXL) DreamBooth training for free by utilizing Kaggle - easy tutorial
58. PIXART-α: the first open-source rival to Midjourney - better than Stable Diffusion SDXL - full tutorial
59. Essential AI tools and libraries: a guide to Python, Git, C++ compile tools, FFmpeg, CUDA, and PyTorch
60. MagicAnimate: temporally consistent human image animation using diffusion models - full tutorial

61. Instant face swap with IP-Adapter-FaceID: full tutorial and GUI for Windows, RunPod, and Kaggle
62. Detailed comparison of the 160+ best Stable Diffusion 1.5 custom models, with a one-click script to download them all
63. SUPIR: new SOTA open-source image upscaling and enhancement model, better than Magnific and Topaz AI - tutorial
64. Full Stable Diffusion SD and XL fine-tuning tutorial with OneTrainer on Windows and cloud - zero to hero

65. Significantly improve Stable Diffusion prompt following and image quality with the Incantations extension
66. The complete guide to enhancing and upscaling images on your PC with SUPIR, like in a sci-fi movie
67. IDM-VTON: the most amazing virtual clothing try-on app - open source - one-click install and use
68. IDM-VTON: the most amazing virtual clothing try-on app - RunPod - Massed Compute - Kaggle

69. Full Stable Cascade tutorial for Windows - the predecessor of SD3 - amazing Gradio app with one-click install
70. Full Stable Cascade tutorial for the cloud - the predecessor of SD3 - Massed Compute, RunPod, and Kaggle
71. How to download (wget) models from CivitAI and Hugging Face (HF) and upload to HF, including private repos
72. Testing Stable Diffusion inference performance with the latest NVIDIA drivers, including TensorRT ONNX

73. Amazing deepfake tutorial: turn anyone into your favorite movie star! Better than Roop and Face Fusion
74. The best open-source deepfake app ROPE - full-HD face-swap deepfakes so easy to use, cloud with no GPU needed
75. V-Express: one-click AI avatar talking-video generator - like D-ID - free and open source
76. V-Express one-click AI talking-avatar generator - like D-ID - guide for Massed Compute, RunPod, and Kaggle

77. Zero-to-hero Stable Diffusion 3 tutorial with the amazing SwarmUI SD Web UI, which uses ComfyUI
78. How to use SwarmUI and Stable Diffusion 3 on the cloud services Kaggle (free), Massed Compute, and RunPod
79. Animate static photos into talking videos with LivePortrait AI, quickly crafting perfect expressions
80. LivePortrait: no-GPU cloud tutorial - RunPod, MassedCompute, and free Kaggle accounts - animate images

81. Kling AI video is finally public (in all countries), free to use and amazing - full tutorial
82. FLUX: the first open-source txt2img model that truly beats Midjourney and the rest - FLUX is the long-awaited SD3
83. SUPIR Online - the ultimate image upscaler from the official developer - full tutorial - SUPIR 2 coming soon
84. FLUX LoRA training simplified: zero-to-hero tutorial guide with the Kohya SS GUI (8GB GPU, Windows)

85. Blazing-fast and ultra-cheap FLUX LoRA training tutorial on Massed Compute and RunPod - no GPU required!
86. Full Invoke AI installation and usage tutorial - Windows, RunPod, and Massed Compute - easy one-click guide
87. Python, CUDA, cuDNN, C++ Build Tools, FFMPEG, and Git installation tutorial for AI applications
88. Full MimicPC usage tutorial - run the best AI apps in your browser on MimicPC servers

89. How to enable a VPN for only a single app with the free Cloudflare Zero Trust Warp VPN - split tunneling
90. FLUX full fine-tuning / DreamBooth training master tutorial for Windows, RunPod, and Massed Compute
91. Stable Diffusion 3.5 Large usage tutorial with the best configs and a comparison against FLUX DEV
92. How to use the Mochi 1 open-source video generation model on a Windows PC, RunPod, and Massed Compute

93. Ultimate tutorial guide to FLUX Tools outpainting, inpainting (Fill), Redux, Depth, and Canny with SwarmUI
94. Step-by-step Windows and cloud tutorial for the best open-source image-to-video generator, CogVideoX1.5-5B-I2V
95. SANA: ultra-HD, fast text-to-image model from NVIDIA - step-by-step tutorial for Windows, cloud, and Kaggle
96. NVIDIA SANA 4K: amazing 16MP text-to-image AI model runs on an 8GB GPU - a game changer

97. MSI RTX 5090 TRIO FurMark benchmark + overclocking + noise testing, compared with the RTX 3090 TI
98. RTX 5090 tested with FLUX DEV, SD 3.5 Large, SD 3.5 Medium, SDXL, and SD 1.5 on an AMD 9950X + RTX 3090 TI
99. Full SwarmUI tutorial for free Kaggle account notebooks - SD 1.5, SDXL, SD 3.5, FLUX, Hunyuan, SkyReels
100. How ChatGPT (large language models) works - a brilliantly illustrated video

101. Wan 2.1 AI video model: ultimate step-by-step tutorial for Windows and an affordable private cloud setup
102. Ultra-advanced Wan 2.1 app update and the famous squish effect - generate squish videos locally
103. Full tutorial for Sony AI's MMAudio - an open-source AI audio generator for video, image, and text
104. Full FramePack tutorial: one-click install on Windows - image-to-video up to 120 seconds with 6GB VRAM

105. Master local AI art and video generation with SwarmUI (ComfyUI backend): the ultimate 2025 tutorial
106. Step-by-step TRELLIS tutorial: generate amazingly high-quality 3D assets locally from static images
107. Transfer any clothing onto a new person and turn anyone into a 3D character - ComfyUI tutorial
108. SwarmUI Wan 2.1 text-to-video (T2V) and image-to-video (I2V) tutorial, massively accelerated with the CausVid LoRA

109. Full SwarmUI TeaCache tutorial with the best Wan 2.1 I2V and T2V presets - using ComfyUI as the backend
110. Full VEO 3 FLOW tutorial - a guide to using VEO3 inside FLOW
111. CausVid LoRA V2 for Wan 2.1 brings a big quality boost with better colors and saturation
112. Full Hi3DGen tutorial with an ultra-advanced app for generating the best 3D meshes from static images

113. Ultimate ComfyUI and SwarmUI tutorial on RunPod with RTX 5000 series GPUs and a one-click setup
114. WAN 2.1 FusionX is the new best choice for local video generation in only 8 steps, plus a FLUX upscaling guide
115. Detailed local Windows tutorial for FLUX Kontext Dev - better image editing than ChatGPT and Gemini
116. Full MultiTalk tutorial with a one-click installer - make talking and singing videos from static images

117. MultiTalk upgraded - better animations than before with new workflows - image-to-video
118. SECourses Video & Image Upscaler Pro: STAR vs TOPAZ StarLight vs the best image-based upscalers
119. Full Wan 2.2 and FLUX Krea tutorial - automatic install - perfect presets included - SwarmUI with ComfyUI
120. Qwen Image dominates text-to-image: 700+ tests reveal why it beats FLUX - presets published

121. Wan 2.2, FLUX, and Qwen Image upgrades: the ultimate tutorial for open-source SOTA image and video generation models
122. Full Qwen Image Edit tutorial: 26 different demo cases with prompts and images - crushes FLUX Kontext Dev
123. Full Nano Banana (Gemini 2.5 Flash Image) tutorial - 27 unique cases vs Qwen Image Edit - free to use

MonsterMMORPG/Wan_GGUF

Author: MonsterMMORPG

Downloads: 5.2K · Likes: 32

Created: 2025-04-29 14:03:11+00:00

Updated: 2026-03-17 22:39:15+00:00

View on Hugging Face
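Every file listed below can be fetched directly with wget or curl, because Hugging Face serves each repo file at a predictable `/resolve/` URL (the same pattern tutorial #71 above relies on). A minimal sketch — the helper name `hf_resolve_url` is my own, but the URL layout is the standard huggingface.co download path:

```python
def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL huggingface.co serves for a repo file.

    `hf_resolve_url` is a hypothetical helper name; the
    https://huggingface.co/<repo>/resolve/<revision>/<file> pattern is the
    standard one usable with wget, curl, or a browser.
    """
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Example: one of the GGUF files listed in this repository.
url = hf_resolve_url("MonsterMMORPG/Wan_GGUF", "FLUX-SRPO-GGUF_Q8_0.gguf")
print(url)
```

The printed URL can then be passed to `wget -c` to resume interrupted downloads of these multi-gigabyte files.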

Files (920)

.gitattributes
Bitirme_Projesi_Oneri_Formu_Ornek.docx
CUDA_11_8.zip
FLUX-SRPO-GGUF_Q4_K.gguf
FLUX-SRPO-GGUF_Q5_K.gguf
FLUX-SRPO-GGUF_Q6_K.gguf
FLUX-SRPO-GGUF_Q8_0.gguf
FLUX-SRPO-Mixed-NVFP4.safetensors
FLUX-SRPO-NVFP4.safetensors
FLUX-SRPO-bf16.safetensors
FLUX_2_Dev_NVFP4.safetensors
FLUX_2_Dev_Q4_K_M.swarm.json
FLUX_2_Models/FLUX2-Klein-Base-4B.safetensors
FLUX_2_Models/FLUX2-Klein-Base-9B.safetensors
FLUX_2_Models/FLUX_2_Dev_BF16.safetensors
FLUX_2_Models/FLUX_2_Dev_FP8_Mixed_Scaled.safetensors
FLUX_2_Models/FLUX_2_Klein_Train_VAE.safetensors
FLUX_2_Models/Mistral3_FLUX2_BF16.safetensors
FLUX_2_Models/Z_Image_Train_VAE.safetensors
FLUX_2_Models/flux2-vae.safetensors
FLUX_2_Models/mistral_3_small_flux2_bf16.safetensors
FLUX_2_Models/mistral_3_small_flux2_fp8.safetensors
FLUX_2_Models/qwen_3_4b.safetensors
FLUX_2_Models/qwen_3_8b.safetensors
FLUX_2_Models/zimage_turbo_training_adapter_v2.safetensors
FLUX_Dev_Kontext_NVFP4.safetensors
FLUX_Dev_NVFP4.safetensors
FLUX_Dev_Quant_FP8_Scaled.safetensors
FLUX_Kontext_Dev.safetensors
FLUX_Kontext_Dev_Quant_FP8_Scaled.safetensors
FlashVSR_VAEs/Wan2.1_VAE.pth
FlashVSR_VAEs/Wan2.2_VAE.pth
FlashVSR_VAEs/lighttaehy1_5.pth
FlashVSR_VAEs/lightvaew2_1.pth
FlashVSR_VAEs/taew2_2.safetensors
Flux_2-Turbo-LoRA.safetensors
Flux_2_Turbo_LoRA_Fixed.safetensors
Glyph-SDXL-v2/.mdl
Glyph-SDXL-v2/.msc
Glyph-SDXL-v2/.mv
Glyph-SDXL-v2/README.md
Glyph-SDXL-v2/assets/Arial.ttf
Glyph-SDXL-v2/assets/chinese_char.txt
Glyph-SDXL-v2/assets/color_idx.json
Glyph-SDXL-v2/assets/font_idx_512.json
Glyph-SDXL-v2/assets/multi_fonts/cn.json
Glyph-SDXL-v2/assets/multi_fonts/de.json
Glyph-SDXL-v2/assets/multi_fonts/en.json
Glyph-SDXL-v2/assets/multi_fonts/es.json
Glyph-SDXL-v2/assets/multi_fonts/fr.json
Glyph-SDXL-v2/assets/multi_fonts/it.json
Glyph-SDXL-v2/assets/multi_fonts/jp.json
Glyph-SDXL-v2/assets/multi_fonts/kr.json
Glyph-SDXL-v2/assets/multi_fonts/pt.json
Glyph-SDXL-v2/assets/multi_fonts/ru.json
Glyph-SDXL-v2/assets/multilingual_10-lang_idx.json
Glyph-SDXL-v2/assets/teaser/teaser_multilingual_1.webp
Glyph-SDXL-v2/assets/teaser/teaser_multilingual_2.webp
Glyph-SDXL-v2/assets/teaser/teaser_multilingual_3.webp
Glyph-SDXL-v2/assets/teaser/teaser_multilingual_4.webp
Glyph-SDXL-v2/checkpoints/byt5_mapper.pt
Glyph-SDXL-v2/checkpoints/byt5_model.pt
Glyph-SDXL-v2/checkpoints/unet_inserted_attn.pt
Glyph-SDXL-v2/checkpoints/unet_lora.pt
Glyph-SDXL-v2/configs/glyph_sdxl_multilingual_albedo.py
Glyph-SDXL-v2/configuration.json
Glyph-SDXL-v2/examples/xiaoman.json
Glyph-SDXL-v2/glyph_sdxl/custom_diffusers/__init__.py
Glyph-SDXL-v2/glyph_sdxl/custom_diffusers/models/__init__.py
Glyph-SDXL-v2/glyph_sdxl/custom_diffusers/models/cross_attn_insert_transformer_blocks.py
Glyph-SDXL-v2/glyph_sdxl/custom_diffusers/pipelines/__init__.py
Glyph-SDXL-v2/glyph_sdxl/custom_diffusers/pipelines/pipeline_stable_diffusion_glyph_xl.py
Glyph-SDXL-v2/glyph_sdxl/modules/__init__.py
Glyph-SDXL-v2/glyph_sdxl/modules/byt5_block_byt5_mapper.py
Glyph-SDXL-v2/glyph_sdxl/modules/simple_byt5_mapper.py
Glyph-SDXL-v2/glyph_sdxl/utils/__init__.py
Glyph-SDXL-v2/glyph_sdxl/utils/constants.py
Glyph-SDXL-v2/glyph_sdxl/utils/format_prompt.py
Glyph-SDXL-v2/glyph_sdxl/utils/load_pretrained_byt5.py
Glyph-SDXL-v2/glyph_sdxl/utils/parse_config.py
Glyph-SDXL-v2/inference_multilingual.py
Glyph-SDXL-v2/requirements.txt
HunyuanImage-2.1_vae/vae_2_1/config.json
HunyuanImage-2.1_vae/vae_2_1/pytorch_model.ckpt
HunyuanImage-2.1_vae/vae_refiner/config.json
HunyuanImage-2.1_vae/vae_refiner/pytorch_model.pt
IDM-VTON/ckpt/densepose/model_final_162be9.pkl
IDM-VTON/ckpt/humanparsing/parsing_atr.onnx
IDM-VTON/ckpt/humanparsing/parsing_lip.onnx
IDM-VTON/ckpt/openpose/.DS_Store
IDM-VTON/ckpt/openpose/ckpts/body_pose_model.pth
Index_TTS2/.gitattributes
Index_TTS2/README.md
Index_TTS2/bpe.model
Index_TTS2/config.yaml
Index_TTS2/feat1.pt
Index_TTS2/feat2.pt
Index_TTS2/gpt.pth
Index_TTS2/qwen0.6bemo4-merge/Modelfile
Index_TTS2/qwen0.6bemo4-merge/added_tokens.json
Index_TTS2/qwen0.6bemo4-merge/chat_template.jinja
Index_TTS2/qwen0.6bemo4-merge/config.json
Index_TTS2/qwen0.6bemo4-merge/generation_config.json
Index_TTS2/qwen0.6bemo4-merge/merges.txt
Index_TTS2/qwen0.6bemo4-merge/model.safetensors
Index_TTS2/qwen0.6bemo4-merge/special_tokens_map.json
Index_TTS2/qwen0.6bemo4-merge/tokenizer.json
Index_TTS2/qwen0.6bemo4-merge/tokenizer_config.json
Index_TTS2/qwen0.6bemo4-merge/vocab.json
Index_TTS2/s2mel.pth
Index_TTS2/wav2vec2bert_stats.pt
LTX2.3-Distilled-FP8-Quant-Scaled.safetensors
LTX2_audio_vae.safetensors
LTX2_video_vae.safetensors
Levo_Song_Generation/SongGeneration-Runtime/.gitattributes
Levo_Song_Generation/SongGeneration-Runtime/README.md
Levo_Song_Generation/SongGeneration-Runtime/ckpt/encode-s12k.pt
Levo_Song_Generation/SongGeneration-Runtime/ckpt/model_1rvq/model_2_fixed.safetensors
Levo_Song_Generation/SongGeneration-Runtime/ckpt/model_septoken/model_2.safetensors
Levo_Song_Generation/SongGeneration-Runtime/ckpt/models--lengyue233--content-vec-best/.no_exist/c0b9ba13db21beaa4053faae94c102ebe326fd68/model.safetensors
Levo_Song_Generation/SongGeneration-Runtime/ckpt/models--lengyue233--content-vec-best/.no_exist/c0b9ba13db21beaa4053faae94c102ebe326fd68/model.safetensors.index.json
Levo_Song_Generation/SongGeneration-Runtime/ckpt/models--lengyue233--content-vec-best/blobs/5186a71b15933aca2d9942db95e1aff02642d1f0
Levo_Song_Generation/SongGeneration-Runtime/ckpt/models--lengyue233--content-vec-best/blobs/d8dd400e054ddf4e6be75dab5a2549db748cc99e756a097c496c099f65a4854e
Levo_Song_Generation/SongGeneration-Runtime/ckpt/models--lengyue233--content-vec-best/refs/main
Levo_Song_Generation/SongGeneration-Runtime/ckpt/models--lengyue233--content-vec-best/snapshots/c0b9ba13db21beaa4053faae94c102ebe326fd68/config.json
Levo_Song_Generation/SongGeneration-Runtime/ckpt/models--lengyue233--content-vec-best/snapshots/c0b9ba13db21beaa4053faae94c102ebe326fd68/pytorch_model.bin
Levo_Song_Generation/SongGeneration-Runtime/ckpt/vae/autoencoder_music_1320k.ckpt
Levo_Song_Generation/SongGeneration-Runtime/ckpt/vae/stable_audio_1920_vae.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/Qwen2-7B/LICENSE
Levo_Song_Generation/SongGeneration-Runtime/third_party/Qwen2-7B/README.md
Levo_Song_Generation/SongGeneration-Runtime/third_party/Qwen2-7B/config.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/Qwen2-7B/generation_config.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/Qwen2-7B/merges.txt
Levo_Song_Generation/SongGeneration-Runtime/third_party/Qwen2-7B/tokenizer.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/Qwen2-7B/tokenizer_config.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/Qwen2-7B/vocab.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/__init__.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/ckpt/htdemucs.pth
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/ckpt/htdemucs.yaml
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/models/__init__.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/models/apply.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/models/audio.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/models/demucs.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/models/htdemucs.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/models/pretrained.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/models/spec.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/models/states.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/models/transformer.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/models/utils.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/demucs/run.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/hub/version.txt
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/.gitignore
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/LICENSE
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/LICENSES/LICENSE_ADP.txt
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/LICENSES/LICENSE_AURALOSS.txt
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/LICENSES/LICENSE_DESCRIPT.txt
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/LICENSES/LICENSE_META.txt
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/LICENSES/LICENSE_NVIDIA.txt
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/LICENSES/LICENSE_XTRANSFORMERS.txt
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/README.md
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/config/model_1920.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/config/model_config.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/defaults.ini
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/docs/autoencoders.md
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/docs/conditioning.md
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/docs/datasets.md
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/docs/diffusion.md
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/docs/pretransforms.md
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/pyproject.toml
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/run_gradio.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/scripts/ds_zero_to_pl_ckpt.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/setup.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/__init__.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/dataset_configs/custom_metadata/custom_md_example.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/dataset_configs/local_training_example.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/dataset_configs/s3_wds_example.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/autoencoders/dac_2048_32_vae.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/autoencoders/encodec_musicgen_rvq.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/autoencoders/stable_audio_1_0_vae.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/autoencoders/stable_audio_2_0_vae.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/autoencoders/stable_audio_vae_1920.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/dance_diffusion/dance_diffusion_base.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/dance_diffusion/dance_diffusion_base_16k.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/dance_diffusion/dance_diffusion_base_44k.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/dance_diffusion/dance_diffusion_large.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/txt2audio/stable_audio_1_0.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/configs/model_configs/txt2audio/stable_audio_2_0.json
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/data/__init__.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/data/dataset.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/data/utils.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/inference/__init__.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/inference/generation.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/inference/sampling.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/inference/utils.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/interface/__init__.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/interface/gradio.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/__init__.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/adp.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/autoencoders.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/blocks.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/bottleneck.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/codebook_patterns.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/conditioners.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/diffusion.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/diffusion_prior.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/discriminators.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/dit.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/factory.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/lm.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/lm_backbone.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/local_attention.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/pqmf.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/pretrained.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/pretransforms.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/transformer.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/utils.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/models/wavelets.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/training/__init__.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/training/autoencoders.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/training/diffusion.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/training/factory.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/training/lm.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/training/losses/__init__.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/training/losses/auraloss.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/training/losses/losses.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/stable_audio_tools/training/utils.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/train.py
Levo_Song_Generation/SongGeneration-Runtime/third_party/stable_audio_tools/unwrap_model.py
Levo_Song_Generation/SongGeneration-base-full/.gitattributes
Levo_Song_Generation/SongGeneration-base-full/README.md
Levo_Song_Generation/SongGeneration-base-full/config.yaml
Levo_Song_Generation/SongGeneration-base-full/model.pt
Levo_Song_Generation/SongGeneration-large/.gitattributes
Levo_Song_Generation/SongGeneration-large/README.md
Levo_Song_Generation/SongGeneration-large/config.yaml
Levo_Song_Generation/SongGeneration-large/model.pt
MelBandRoformer_fp32.safetensors
Ovi_Premium/MMAudio/ext_weights/best_netG.pt
Ovi_Premium/MMAudio/ext_weights/v1-16.pth
Ovi_Premium/Ovi/model.safetensors
Ovi_Premium/Ovi/model_fp8_scaled.safetensors
Ovi_Premium/Wan2.2-TI2V-5B/Wan2.2_VAE.pth
Ovi_Premium/Wan2.2-TI2V-5B/google/umt5-xxl/special_tokens_map.json
Ovi_Premium/Wan2.2-TI2V-5B/google/umt5-xxl/spiece.model
Ovi_Premium/Wan2.2-TI2V-5B/google/umt5-xxl/tokenizer.json
Ovi_Premium/Wan2.2-TI2V-5B/google/umt5-xxl/tokenizer_config.json
Ovi_Premium/Wan2.2-TI2V-5B/models_t5_umt5-xxl-enc-bf16.pth
Ovi_Premium/Wan2.2-TI2V-5B/models_t5_umt5-xxl-enc-fp8_scaled.safetensors
Phantom_Wan_14B_FusionX_LoRA.safetensors
Qwen-Image-Edit-2509-Fusion-LoRA.safetensors
Qwen-Image-Edit-2509-Lightning-4steps-V1.0-fp32.safetensors
Qwen-Image-Edit-2509-Lightning-8steps-V1.0-fp32.safetensors
Qwen-Image-Edit-2509-Multiple-Angles-LoRA.safetensors
Qwen-Image-Edit-2509-Relight-LoRA.safetensors
Qwen-Image-Edit-Lightning-8steps-V1.0.safetensors
Qwen-Image-Edit-Plus-2509-Q3_K_M.gguf
Qwen-Image-Edit-Plus-2509-Q4_1.gguf
Qwen-Image-Edit-Plus-2509-Q5_1.gguf
Qwen-Image-Edit-Plus-2509-Q6_K.gguf
Qwen-Image-Edit-Plus-2509-Q8_0.gguf
Qwen-Image-Edit-Plus-2509-Q8_0.swarm.json
Qwen-Image-FP8-Lightning-4steps-V1.0-fp32.safetensors
Qwen-Image-Lightning-8steps-V1.1.safetensors
Qwen-Image-Lightning-8steps-V2.0.safetensors
Qwen2.5-VL-7B-Instruct/.gitattributes
Qwen2.5-VL-7B-Instruct/README.md
Qwen2.5-VL-7B-Instruct/chat_template.json
Qwen2.5-VL-7B-Instruct/config.json
Qwen2.5-VL-7B-Instruct/generation_config.json
Qwen2.5-VL-7B-Instruct/merges.txt
Qwen2.5-VL-7B-Instruct/model-00001-of-00005.safetensors
Qwen2.5-VL-7B-Instruct/model-00002-of-00005.safetensors
Qwen2.5-VL-7B-Instruct/model-00003-of-00005.safetensors
Qwen2.5-VL-7B-Instruct/model-00004-of-00005.safetensors
Qwen2.5-VL-7B-Instruct/model-00005-of-00005.safetensors
Qwen2.5-VL-7B-Instruct/model.safetensors.index.json
Qwen2.5-VL-7B-Instruct/preprocessor_config.json
Qwen2.5-VL-7B-Instruct/tokenizer.json
Qwen2.5-VL-7B-Instruct/tokenizer_config.json
Qwen2.5-VL-7B-Instruct/vocab.json
Qwen_Image_2512_BF16.safetensors
Qwen_Image_2512_Quant_FP8_Scaled.safetensors
Qwen_Image_Edit_2511_BF16.safetensors
Qwen_Image_Edit_2511_Quant_Scaled_FP8.safetensors
Qwen_Image_Edit_BF16.safetensors
Qwen_Image_Edit_FP8_e4m3fn.safetensors
Qwen_Image_Edit_GGUF_Q4_K_M.gguf
Qwen_Image_Edit_GGUF_Q5_1.gguf
Qwen_Image_Edit_GGUF_Q6_K.gguf
Qwen_Image_Edit_GGUF_Q8_0.gguf
Qwen_Image_Edit_Plus_2509_FP8_Scaled.safetensors
Qwen_Image_Edit_Plus_2509_bf16.safetensors
Qwen_Image_Edit_Plus_2509_fp8_e4m3fn.safetensors
Qwen_Image_FP8_Scaled.safetensors
Qwen_LoRA_Amateur_Photo_v1.safetensors
Qwen_LoRA_Kook_V2_Cinematic.safetensors
Qwen_LoRA_Skin_Fix_v2.safetensors
README.md
RIFE_Models/4.14/.DS_Store
RIFE_Models/4.14/IFNet_HDv3.py
RIFE_Models/4.14/RIFE_HDv3.py
RIFE_Models/4.14/flownet.pkl
RIFE_Models/4.14/refine.py
RIFE_Models/4.15/.DS_Store
RIFE_Models/4.15/IFNet_HDv3.py
RIFE_Models/4.15/RIFE_HDv3.py
RIFE_Models/4.15/flownet.pkl
RIFE_Models/4.15/refine.py
RIFE_Models/4.17/.DS_Store
RIFE_Models/4.17/IFNet_HDv3.py
RIFE_Models/4.17/RIFE_HDv3.py
RIFE_Models/4.17/flownet.pkl
RIFE_Models/4.17/refine.py
RIFE_Models/4.18/.DS_Store
RIFE_Models/4.18/IFNet_HDv3.py
RIFE_Models/4.18/RIFE_HDv3.py
RIFE_Models/4.18/flownet.pkl
RIFE_Models/4.18/refine.py
RIFE_Models/4.20/.DS_Store
RIFE_Models/4.20/IFNet_HDv3.py
RIFE_Models/4.20/RIFE_HDv3.py
RIFE_Models/4.20/flownet.pkl
RIFE_Models/4.20/refine.py
RIFE_Models/4.21/.DS_Store
RIFE_Models/4.21/IFNet_HDv3.py
RIFE_Models/4.21/RIFE_HDv3.py
RIFE_Models/4.21/flownet.pkl
RIFE_Models/4.21/refine.py
RIFE_Models/4.22/.DS_Store
RIFE_Models/4.22/IFNet_HDv3.py
RIFE_Models/4.22/RIFE_HDv3.py
RIFE_Models/4.22/flownet.pkl
RIFE_Models/4.22/refine.py
RIFE_Models/4.25/.DS_Store
RIFE_Models/4.25/IFNet_HDv3.py
RIFE_Models/4.25/RIFE_HDv3.py
RIFE_Models/4.25/__pycache__/IFNet_HDv3.cpython-311.pyc
RIFE_Models/4.25/__pycache__/RIFE_HDv3.cpython-311.pyc
RIFE_Models/4.25/flownet.pkl
RIFE_Models/4.25/refine.py
RIFE_Models/4.26/.DS_Store
RIFE_Models/4.26/IFNet_HDv3.py
RIFE_Models/4.26/RIFE_HDv3.py
RIFE_Models/4.26/__pycache__/IFNet_HDv3.cpython-311.pyc
RIFE_Models/4.26/__pycache__/RIFE_HDv3.cpython-311.pyc
RIFE_Models/4.26/flownet.pkl
RIFE_Models/4.26/refine.py
RMBG2/.gitattributes
RMBG2/BiRefNet_config.py
RMBG2/README.md
RMBG2/birefnet.py
RMBG2/collage5.png
RMBG2/config.json
RMBG2/diagram1.png
RMBG2/model.safetensors
RMBG2/onnx/model.onnx
RMBG2/onnx/model_bnb4.onnx
RMBG2/onnx/model_fp16.onnx
RMBG2/onnx/model_int8.onnx
RMBG2/onnx/model_q4.onnx
RMBG2/onnx/model_q4f16.onnx
RMBG2/onnx/model_quantized.onnx
RMBG2/onnx/model_uint8.onnx
RMBG2/preprocessor_config.json
RMBG2/pytorch_model.bin
RMBG2/t4.png
RVC_Demo_Voices/21_Savage_Demo.index
RVC_Demo_Voices/21_Savage_Demo.pth
RVC_Demo_Voices/2Pac_Tupac_Demo.index
RVC_Demo_Voices/2Pac_Tupac_Demo.pth
RVC_Demo_Voices/6lack_Demo.index
RVC_Demo_Voices/6lack_Demo.pth
RVC_Demo_Voices/ASAP_Rocky_Demo.index
RVC_Demo_Voices/ASAP_Rocky_Demo.pth
RVC_Demo_Voices/Anderson_PAAK_Demo.index
RVC_Demo_Voices/Anderson_PAAK_Demo.pth
RVC_Demo_Voices/Arijit_Singh_Demo.index
RVC_Demo_Voices/Arijit_Singh_Demo.pth
RVC_Demo_Voices/Bad_Bunny_Demo.index
RVC_Demo_Voices/Bad_Bunny_Demo.pth
RVC_Demo_Voices/Barack_Obama_Demo.index
RVC_Demo_Voices/Barack_Obama_Demo.pth
RVC_Demo_Voices/Biggie_Smalls_Demo.index
RVC_Demo_Voices/Biggie_Smalls_Demo.pth
RVC_Demo_Voices/Bob_Marley_Demo.index
RVC_Demo_Voices/Bob_Marley_Demo.pth
RVC_Demo_Voices/Brent_Faiyaz_Demo.index
RVC_Demo_Voices/Brent_Faiyaz_Demo.pth
RVC_Demo_Voices/Bryson_Tiller_Demo.index
RVC_Demo_Voices/Bryson_Tiller_Demo.pth
RVC_Demo_Voices/BurnaBoy_Demo.index
RVC_Demo_Voices/BurnaBoy_Demo.pth
RVC_Demo_Voices/Central_Cee_Demo.index
RVC_Demo_Voices/Central_Cee_Demo.pth
RVC_Demo_Voices/Chester_Bennington_Demo.index
RVC_Demo_Voices/Chester_Bennington_Demo.pth
RVC_Demo_Voices/Childish_Gambino_Demo.index
RVC_Demo_Voices/Childish_Gambino_Demo.pth
RVC_Demo_Voices/Chris_Brown_Demo.index
RVC_Demo_Voices/Chris_Brown_Demo.pth
RVC_Demo_Voices/Chris_Martin_Demo.index
RVC_Demo_Voices/Chris_Martin_Demo.pth
RVC_Demo_Voices/Daddy_Yankee_Demo.index
RVC_Demo_Voices/Daddy_Yankee_Demo.pth
RVC_Demo_Voices/Donald_Trump_Demo.index
RVC_Demo_Voices/Donald_Trump_Demo.pth
RVC_Demo_Voices/Donald_Trump_v2_Demo.index
RVC_Demo_Voices/Donald_Trump_v2_Demo.pth
RVC_Demo_Voices/Drake_Demo.index
RVC_Demo_Voices/Drake_Demo.pth
RVC_Demo_Voices/Ed_Sheeran_Demo.index
RVC_Demo_Voices/Ed_Sheeran_Demo.pth
RVC_Demo_Voices/Eminem_Demo.index
RVC_Demo_Voices/Eminem_Demo.pth
RVC_Demo_Voices/Frank_Sinatra_Demo.index
RVC_Demo_Voices/Frank_Sinatra_Demo.pth
RVC_Demo_Voices/Joe_Biden_Demo.index
RVC_Demo_Voices/Joe_Biden_Demo.pth
RVC_Demo_Voices/Joe_Biden_v2_Demo.index
RVC_Demo_Voices/Joe_Biden_v2_Demo.pth
RVC_Demo_Voices/Justin_Bieber_Demo.index
RVC_Demo_Voices/Justin_Bieber_Demo.pth
RVC_Demo_Voices/Kanye_West_Demo.index
RVC_Demo_Voices/Kanye_West_Demo.pth
RVC_Demo_Voices/Lil_Wayne_Demo.index
RVC_Demo_Voices/Lil_Wayne_Demo.pth
RVC_Demo_Voices/Michael_Jackson_Demo.index
RVC_Demo_Voices/Michael_Jackson_Demo.pth
RVC_Demo_Voices/Playboi_Carti_Demo.index
RVC_Demo_Voices/Playboi_Carti_Demo.pth
RVC_Demo_Voices/Rauw_Alejandro_Demo.index
RVC_Demo_Voices/Rauw_Alejandro_Demo.pth
RVC_Demo_Voices/Snoop_Dogg_Demo.index
RVC_Demo_Voices/Snoop_Dogg_Demo.pth
RVC_Demo_Voices/The_Weeknd_Demo.index
RVC_Demo_Voices/The_Weeknd_Demo.pth
RVC_Demo_Voices/Travis_Scott_Demo.index
RVC_Demo_Voices/Travis_Scott_Demo.pth
RVC_Demo_Voices/Usher_Demo.index
RVC_Demo_Voices/Usher_Demo.pth
SAM_2-1.0-cp310-cp310-linux_x86_64.whl
SAM_2-1.0-cp310-cp310-win_amd64.whl
SDXL_CyberRealistic_v8_FP32.safetensors
SDXL_RealVis_v5_FP32.safetensors
Speaker_Diarization_3_1/.gitattributes
Speaker_Diarization_3_1/.github/workflows/sync_to_hub.yaml
Speaker_Diarization_3_1/README.md
Speaker_Diarization_3_1/config.yaml
Speaker_Diarization_3_1/handler.py
Speaker_Diarization_3_1/reproducible_research/AISHELL.SpeakerDiarization.Benchmark.test.eval
Speaker_Diarization_3_1/reproducible_research/AISHELL.SpeakerDiarization.Benchmark.test.rttm
Speaker_Diarization_3_1/reproducible_research/AMI-SDM.SpeakerDiarization.Benchmark.test.eval
Speaker_Diarization_3_1/reproducible_research/AMI-SDM.SpeakerDiarization.Benchmark.test.rttm
Speaker_Diarization_3_1/reproducible_research/AMI.SpeakerDiarization.Benchmark.test.eval
Speaker_Diarization_3_1/reproducible_research/AMI.SpeakerDiarization.Benchmark.test.rttm
Speaker_Diarization_3_1/reproducible_research/AVA-AVD.SpeakerDiarization.Benchmark.test.eval
Speaker_Diarization_3_1/reproducible_research/AVA-AVD.SpeakerDiarization.Benchmark.test.rttm
Speaker_Diarization_3_1/reproducible_research/AliMeeting.SpeakerDiarization.Benchmark.test.eval
Speaker_Diarization_3_1/reproducible_research/AliMeeting.SpeakerDiarization.Benchmark.test.rttm
Speaker_Diarization_3_1/reproducible_research/DIHARD.SpeakerDiarization.Benchmark.test.eval
Speaker_Diarization_3_1/reproducible_research/DIHARD.SpeakerDiarization.Benchmark.test.rttm
Speaker_Diarization_3_1/reproducible_research/MSDWILD.SpeakerDiarization.Benchmark.test.eval
Speaker_Diarization_3_1/reproducible_research/MSDWILD.SpeakerDiarization.Benchmark.test.rttm
Speaker_Diarization_3_1/reproducible_research/REPERE.SpeakerDiarization.Benchmark.test.eval
Speaker_Diarization_3_1/reproducible_research/REPERE.SpeakerDiarization.Benchmark.test.rttm
Speaker_Diarization_3_1/reproducible_research/VoxConverse.SpeakerDiarization.Benchmark.test.eval
Speaker_Diarization_3_1/reproducible_research/VoxConverse.SpeakerDiarization.Benchmark.test.rttm
Speaker_Diarization_3_1/requirements.txt
TensorRT-10.13.2.6.Windows.win10.cuda-12.9.zip
Trellv2/ZhengPeng7--BiRefNet/.gitattributes
Trellv2/ZhengPeng7--BiRefNet/.gitignore
Trellv2/ZhengPeng7--BiRefNet/BiRefNet_config.py
Trellv2/ZhengPeng7--BiRefNet/README.md
Trellv2/ZhengPeng7--BiRefNet/birefnet.py
Trellv2/ZhengPeng7--BiRefNet/config.json
Trellv2/ZhengPeng7--BiRefNet/handler.py
Trellv2/ZhengPeng7--BiRefNet/model.safetensors
Trellv2/ZhengPeng7--BiRefNet/requirements.txt
Trellv2/briaai--RMBG-2.0/.gitattributes
Trellv2/briaai--RMBG-2.0/BiRefNet_config.py
Trellv2/briaai--RMBG-2.0/README.md
Trellv2/briaai--RMBG-2.0/birefnet.py
Trellv2/briaai--RMBG-2.0/collage5.png
Trellv2/briaai--RMBG-2.0/config.json
Trellv2/briaai--RMBG-2.0/diagram1.png
Trellv2/briaai--RMBG-2.0/model.safetensors
Trellv2/briaai--RMBG-2.0/onnx/model.onnx
Trellv2/briaai--RMBG-2.0/onnx/model_bnb4.onnx
Trellv2/briaai--RMBG-2.0/onnx/model_fp16.onnx
Trellv2/briaai--RMBG-2.0/onnx/model_int8.onnx
Trellv2/briaai--RMBG-2.0/onnx/model_q4.onnx
Trellv2/briaai--RMBG-2.0/onnx/model_q4f16.onnx
Trellv2/briaai--RMBG-2.0/onnx/model_quantized.onnx
Trellv2/briaai--RMBG-2.0/onnx/model_uint8.onnx
Trellv2/briaai--RMBG-2.0/preprocessor_config.json
Trellv2/briaai--RMBG-2.0/pytorch_model.bin
Trellv2/briaai--RMBG-2.0/t4.png
Trellv2/facebook--dinov3-vitl16-pretrain-lvd1689m/.gitattributes
Trellv2/facebook--dinov3-vitl16-pretrain-lvd1689m/LICENSE.md
Trellv2/facebook--dinov3-vitl16-pretrain-lvd1689m/README.md
Trellv2/facebook--dinov3-vitl16-pretrain-lvd1689m/config.json
Trellv2/facebook--dinov3-vitl16-pretrain-lvd1689m/model.safetensors
Trellv2/facebook--dinov3-vitl16-pretrain-lvd1689m/preprocessor_config.json
Trellv2/microsoft--TRELLIS.2-4B/.gitattributes
Trellv2/microsoft--TRELLIS.2-4B/README.md
Trellv2/microsoft--TRELLIS.2-4B/ckpts/shape_dec_next_dc_f16c32_fp16.json
Trellv2/microsoft--TRELLIS.2-4B/ckpts/shape_dec_next_dc_f16c32_fp16.safetensors
Trellv2/microsoft--TRELLIS.2-4B/ckpts/shape_enc_next_dc_f16c32_fp16.json
Trellv2/microsoft--TRELLIS.2-4B/ckpts/shape_enc_next_dc_f16c32_fp16.safetensors
Trellv2/microsoft--TRELLIS.2-4B/ckpts/slat_flow_img2shape_dit_1_3B_1024_bf16.json
Trellv2/microsoft--TRELLIS.2-4B/ckpts/slat_flow_img2shape_dit_1_3B_1024_bf16.safetensors
Trellv2/microsoft--TRELLIS.2-4B/ckpts/slat_flow_img2shape_dit_1_3B_512_bf16.json
Trellv2/microsoft--TRELLIS.2-4B/ckpts/slat_flow_img2shape_dit_1_3B_512_bf16.safetensors
Trellv2/microsoft--TRELLIS.2-4B/ckpts/slat_flow_imgshape2tex_dit_1_3B_1024_bf16.json
Trellv2/microsoft--TRELLIS.2-4B/ckpts/slat_flow_imgshape2tex_dit_1_3B_1024_bf16.safetensors
Trellv2/microsoft--TRELLIS.2-4B/ckpts/slat_flow_imgshape2tex_dit_1_3B_512_bf16.json
Trellv2/microsoft--TRELLIS.2-4B/ckpts/slat_flow_imgshape2tex_dit_1_3B_512_bf16.safetensors
Trellv2/microsoft--TRELLIS.2-4B/ckpts/ss_flow_img_dit_1_3B_64_bf16.json
Trellv2/microsoft--TRELLIS.2-4B/ckpts/ss_flow_img_dit_1_3B_64_bf16.safetensors
Trellv2/microsoft--TRELLIS.2-4B/ckpts/tex_dec_next_dc_f16c32_fp16.json
Trellv2/microsoft--TRELLIS.2-4B/ckpts/tex_dec_next_dc_f16c32_fp16.safetensors
Trellv2/microsoft--TRELLIS.2-4B/ckpts/tex_enc_next_dc_f16c32_fp16.json
Trellv2/microsoft--TRELLIS.2-4B/ckpts/tex_enc_next_dc_f16c32_fp16.safetensors
Trellv2/microsoft--TRELLIS.2-4B/pipeline.json
Trellv2/microsoft--TRELLIS.2-4B/texturing_pipeline.json
UltraShape_For_Trellis2/.gitattributes
UltraShape_For_Trellis2/README.md
UltraShape_For_Trellis2/ultrashape_v1.pt
UltraShape_For_Trellis2/vae/vae_step=15000.ckpt
VibeVoice_4bit/QUANTIZATION_README.md
VibeVoice_4bit/README.md
VibeVoice_4bit/config.json
VibeVoice_4bit/generation_config.json
VibeVoice_4bit/load_quantized_4bit.py
VibeVoice_4bit/minimal_memory_output.wav
VibeVoice_4bit/model-00001-of-00002.safetensors
VibeVoice_4bit/model-00002-of-00002.safetensors
VibeVoice_4bit/model.safetensors.index.json
VibeVoice_4bit/preprocessor_config.json
VibeVoice_4bit/quantization_config.json
VibeVoice_4bit/quantize_and_save_vibevoice.py
VibeVoice_4bit/test_accurate_vram.py
VibeVoice_4bit/use_quantized_model.py
VibeVoice_4bit/vibevoice_7gb_target.py
VibeVoice_BF16/.gitattributes
VibeVoice_BF16/README.md
VibeVoice_BF16/config.json
VibeVoice_BF16/configuration.json
VibeVoice_BF16/figures/Fig1.png
VibeVoice_BF16/model-00001-of-00010.safetensors
VibeVoice_BF16/model-00002-of-00010.safetensors
VibeVoice_BF16/model-00003-of-00010.safetensors
VibeVoice_BF16/model-00004-of-00010.safetensors
VibeVoice_BF16/model-00005-of-00010.safetensors
VibeVoice_BF16/model-00006-of-00010.safetensors
VibeVoice_BF16/model-00007-of-00010.safetensors
VibeVoice_BF16/model-00008-of-00010.safetensors
VibeVoice_BF16/model-00009-of-00010.safetensors
VibeVoice_BF16/model-00010-of-00010.safetensors
VibeVoice_BF16/model.safetensors.index.json
VibeVoice_BF16/preprocessor_config.json
Viso_Master_Models/1k3d68.onnx
Viso_Master_Models/2d106det.onnx
Viso_Master_Models/2dfan4.onnx
Viso_Master_Models/4x-UltraMix_Smooth.fp16.onnx
Viso_Master_Models/4x-UltraSharp.fp16.onnx
Viso_Master_Models/BSRGANx2.fp16.onnx
Viso_Master_Models/BSRGANx4.fp16.onnx
Viso_Master_Models/ColorizeArtistic.fp16.onnx
Viso_Master_Models/ColorizeStable.fp16.onnx
Viso_Master_Models/ColorizeVideo.fp16.onnx
Viso_Master_Models/GFPGANv1.4.onnx
Viso_Master_Models/GPEN-BFR-1024.onnx
Viso_Master_Models/GPEN-BFR-2048.onnx
Viso_Master_Models/GPEN-BFR-256.onnx
Viso_Master_Models/GPEN-BFR-512.onnx
Viso_Master_Models/InStyleSwapper256_Version_A.fp16.onnx
Viso_Master_Models/InStyleSwapper256_Version_B.fp16.onnx
Viso_Master_Models/InStyleSwapper256_Version_C.fp16.onnx
Viso_Master_Models/RealESRGAN_x2plus.fp16.onnx
Viso_Master_Models/RealESRGAN_x4plus.fp16.onnx
Viso_Master_Models/RestoreFormerPlusPlus.fp16.onnx
Viso_Master_Models/VQFRv2.fp16.onnx
Viso_Master_Models/XSeg_model.onnx
Viso_Master_Models/canonswap/appearance_feature_extractor.onnx
Viso_Master_Models/canonswap/id_extractor.onnx
Viso_Master_Models/canonswap/motion_extractor.onnx
Viso_Master_Models/canonswap/refine_module.onnx
Viso_Master_Models/canonswap/spade_generator.onnx
Viso_Master_Models/canonswap/swap_module.onnx
Viso_Master_Models/canonswap/warping_decoder.onnx
Viso_Master_Models/codeformer_fp16.onnx
Viso_Master_Models/cscs_256.onnx
Viso_Master_Models/cscs_arcface_model.onnx
Viso_Master_Models/cscs_id_adapter.onnx
Viso_Master_Models/ddcolor.onnx
Viso_Master_Models/ddcolor_artistic.onnx
Viso_Master_Models/det_10g.onnx
Viso_Master_Models/dfm_models/.gitkeep
Viso_Master_Models/face_blendshapes_Nx146x2.onnx
Viso_Master_Models/face_landmarks_detector_Nx3x256x256.onnx
Viso_Master_Models/faceparser_resnet34.onnx
Viso_Master_Models/gfpgan-1024.onnx
Viso_Master_Models/ghost_arcface_backbone.onnx
Viso_Master_Models/ghost_unet_1_block.onnx
Viso_Master_Models/ghost_unet_2_block.onnx
Viso_Master_Models/ghost_unet_3_block.onnx
Viso_Master_Models/grid_sample_3d_plugin.dll
Viso_Master_Models/inswapper_128.fp16.onnx
Viso_Master_Models/landmark.onnx
Viso_Master_Models/libgrid_sample_3d_plugin.so
Viso_Master_Models/liveportrait_onnx/appearance_feature_extractor.onnx
Viso_Master_Models/liveportrait_onnx/lip_array.pkl
Viso_Master_Models/liveportrait_onnx/motion_extractor.onnx
Viso_Master_Models/liveportrait_onnx/stitching.onnx
Viso_Master_Models/liveportrait_onnx/stitching_eye.onnx
Viso_Master_Models/liveportrait_onnx/stitching_lip.onnx
Viso_Master_Models/liveportrait_onnx/warping_spade-fix.onnx
Viso_Master_Models/liveportrait_onnx/warping_spade.onnx
Viso_Master_Models/meanshape_68.pkl
Viso_Master_Models/occluder.onnx
Viso_Master_Models/peppapig_teacher_Nx3x256x256.onnx
Viso_Master_Models/rd64-uni-refined.pth
Viso_Master_Models/realesr-general-x4v3.onnx
Viso_Master_Models/ref-ldm_embedding/ckpts/refldm.ckpt
Viso_Master_Models/ref-ldm_embedding/ckpts/vqgan.ckpt
Viso_Master_Models/ref-ldm_embedding/configs/ldm.yaml
Viso_Master_Models/ref-ldm_embedding/configs/refldm.yaml
Viso_Master_Models/ref-ldm_embedding/configs/vqgan.yaml
Viso_Master_Models/ref_ldm_unet_external_kv.onnx
Viso_Master_Models/ref_ldm_vae_decoder.onnx
Viso_Master_Models/ref_ldm_vae_encoder.onnx
Viso_Master_Models/reference_kv_data/input_234876039319086603747716700954764771908.pt
Viso_Master_Models/refldm.ckpt
Viso_Master_Models/res50.onnx
Viso_Master_Models/scrfd_2.5g_bnkps.onnx
Viso_Master_Models/simswap_512_unoff.onnx
Viso_Master_Models/simswap_arcface_model.onnx
Viso_Master_Models/vgg_combo_relu3_3_relu3_1.onnx
Viso_Master_Models/w600k_r50.onnx
Viso_Master_Models/yoloface_8n.onnx
Viso_Master_Models/yunet_n_640_640.onnx
Wan-2.2-I2V-High-Noise-BF16.safetensors
Wan-2.2-I2V-Low-Noise-BF16.safetensors
Wan-2.2-T2V-High-Noise-BF16.safetensors
Wan-2.2-T2V-Low-Noise-BF16.safetensors
Wan2.1_14b_FusionX_Image_to_Video_GGUF_Q4_K_M.gguf
Wan2.1_14b_FusionX_Image_to_Video_GGUF_Q5_K_M.gguf
Wan2.1_14b_FusionX_Image_to_Video_GGUF_Q6_K.gguf
Wan2.1_14b_FusionX_Image_to_Video_GGUF_Q8.gguf
Wan2.1_Image_to_Video_14B_FusionX_LoRA.safetensors
Wan2.1_Text_to_Video_14B_FusionX_LoRA.safetensors
Wan2.1_VAE.pth
Wan2.2-I2V-A14B-Moe-Distill-Lightx2v-High_fp8_scaled.safetensors
Wan2.2-I2V-A14B-Moe-Distill-Lightx2v-Low_fp8_scaled.safetensors
Wan2.2-T2V-A14B-4steps-lora-250928-High.safetensors
Wan2.2-T2V-A14B-4steps-lora-250928-Low.safetensors
Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V2.0-High.safetensors
Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V2.0-Low.safetensors
Wan2.2_T2V_High_Noise_Lightx2v_4steps_LoRA_1217.safetensors
Wan2.2_T2V_Low_Noise_Lightx2v_4steps_LoRA_1217.safetensors
Wan21_14B_Self_Forcing_LoRA_T2V_I2V.safetensors
Wan21_I2V_14B_lightx2v_cfg_step_distill_lora_rank64_fixed.safetensors
Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors
Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank64.safetensors
Wan21_Uni3C_controlnet_fp16.safetensors
Wan2_1_VAE_bf16.safetensors
Wan2_2-I2V-A14B-4steps-lora-rank64-Seko-V1_High.safetensors
Wan2_2-I2V-A14B-4steps-lora-rank64-Seko-V1_Low.safetensors
Wan2_2-T2V-A14B-4steps-lora-rank64-Seko-V1_1_High.safetensors
Wan2_2-T2V-A14B-4steps-lora-rank64-Seko-V1_1_Low.safetensors
WanVideo_2_1_Multitalk_14B_fp32.safetensors
WanVideo_2_1_Multitalk_14B_fp8_e4m3fn.safetensors
Wuli-Qwen-Image-2512-Turbo-LoRA-4steps-V1.safetensors
Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps.safetensors
Z-Image-Turbo-Fun-Controlnet-Union.safetensors
Z-Image-Turbo-Models/Z_Image_Turbo_BF16.safetensors
Z-Image-Turbo-Models/Z_Image_Turbo_FP8_scaled.safetensors
Z-Image-Turbo-Models/qwen_3_4b.safetensors
Z_Image_BF16.safetensors
Z_Image_Quant_FP8_Scaled.safetensors
Z_Image_Training_Text_Encoder.safetensors
Z_Image_Turbo_NVFP4.safetensors
atom3d-0.1.0+torch291_cu13-cp311-cp311-win_amd64.whl
byt5/.gitattributes
byt5/README.md
byt5/config.json
byt5/flax_model.msgpack
byt5/generation_config.json
byt5/pytorch_model.bin
byt5/special_tokens_map.json
byt5/tf_model.h5
byt5/tokenizer_config.json
clip_vision_h.safetensors
cubvh-0.1.2-cp311-cp311-win_amd64.whl
cuda_12.9.1_576.57_windows.exe
cuda_13.1.0_windows.exe
cuda_13.1.1_windows.exe
cudnn_8_9_7_2.zip
cudnn_9.12.0_windows.exe
cudnn_9.12_cuda_12.9.zip
cudnn_9.17.1_windows_x86_64.exe
cudnn_9.19.0_windows_x86_64.exe
cumesh-0.0.1-cp310-cp310-linux_x86_64.whl
cumesh-0.0.1-cp310-cp310-win_amd64.whl
cumesh-0.0.1-cp311-cp311-linux_x86_64.whl
cumesh-0.0.1-cp311-cp311-win_amd64.whl
cumm_cu_129_cuda_version-0.8.2-1torch28-cp311-cp311-win_amd64.whl
cumm_cu_131_cuda_version-0.8.2+torch2.9.1-cp311-cp311-win_amd64.whl
deepspeed-0.16.5-cp310-cp310-win_amd64.whl
deepspeed-0.16.5-cp311-cp311-win_amd64.whl
deepspeed-0.16.5-cp312-cp312-win_amd64.whl
dinov2-large/.gitattributes
dinov2-large/README.md
dinov2-large/config.json
dinov2-large/model.safetensors
dinov2-large/preprocessor_config.json
dinov2-large/pytorch_model.bin
ema_vae_fp16.safetensors
ffmpeg-N-118385-g0225fe857d-linux64-gpl.tar.xz
ffmpeg-N-121105-ga0936b9769-linux64-gpl.tar.xz
ffmpeg-N-121105-ga0936b9769-win64-gpl.zip
ffmpeg-master-2025-12-03-linux64-gpl.tar.xz
ffmpeg-master-2025-12-03-win64-gpl.zip
ffmpeg-n8.0-latest-linux64-gpl-8.0.tar.xz
ffmpeg-n8.0-latest-linux64-gpl-shared-8.0.tar.xz
ffmpeg-n8.0-latest-win64-gpl-8.0.zip
ffmpeg-n8.0-latest-win64-gpl-shared-8.0.zip
final_qwen_test.zip
flash_attn-2.8.2-cp310-cp310-linux_x86_64.whl
flash_attn-2.8.2-cp310-cp310-win_amd64.whl
flash_attn-2.8.2-cp311-cp311-linux_x86_64.whl
flash_attn-2.8.2-cp311-cp311-win_amd64.whl
flash_attn-2.8.2-cp312-cp312-win_amd64.whl
flash_attn-2.8.2-cp313-cp313-win_amd64.whl
flash_attn-2.8.3+torch2.9.1.cuda13.1-cp310-cp310-linux_x86_64.whl
flash_attn-2.8.3+torch2.9.1.cuda13.1-cp310-cp310-win_amd64.whl
flash_attn-2.8.3+torch2.9.1.cuda13.1-cp311-cp311-linux_x86_64.whl
flash_attn-2.8.3+torch2.9.1.cuda13.1-cp311-cp311-win_amd64.whl
flash_attn-2.8.3+torch2.9.1.cuda13.1-cp312-cp312-win_amd64.whl
flash_attn-2.8.3+torch2.9.1.cuda13.1-cp313-cp313-win_amd64.whl
flash_attn-2.8.3-cp310-cp310-win_amd64.whl
flex_gemm-0.0.1-cp310-cp310-linux_x86_64.whl
flex_gemm-0.0.1-cp310-cp310-win_amd64.whl
flex_gemm-1.0.0-cp311-cp311-win_amd64.whl
flux1-krea-dev.safetensors
flux1_krea_dev_BF16_Q4_1.gguf
flux1_krea_dev_BF16_Q5_1.gguf
flux1_krea_dev_BF16_Q6_K.gguf
flux1_krea_dev_BF16_Q8_0.gguf
gemma_3_12B_it_fpmixed.safetensors
groundingdino-0.1.0-cp310-cp310-linux_x86_64.whl
groundingdino-0.1.0-cp310-cp310-win_amd64.whl
gsplat-1.5.3-cp310-cp310-linux_x86_64.whl
gsplat-1.5.3-cp310-cp310-win_amd64.whl
hunyuan_image_21_vae.ckpt
hunyuan_image_21_vae_refiner.pt
hunyuanimage-refiner.safetensors
hunyuanimage2.1-distilled.safetensors
hunyuanimage2.1.safetensors
insightface-0.7.3-cp310-cp310-linux_x86_64.whl
insightface-0.7.3-cp310-cp310-win_amd64.whl
insightface-0.7.3-cp311-cp311-linux_x86_64.whl
insightface-0.7.3-cp311-cp311-win_amd64.whl
insightface-0.7.3-cp312-cp312-win_amd64.whl
insightface-0.7.3-cp313-cp313-win_amd64.whl
lisans_ders_secimi.zip
ltx-2-19b-embeddings_connector_distill_bf16.safetensors
ltx-2-19b-ic-lora-canny-control.safetensors
ltx-2-19b-ic-lora-depth-control.safetensors
ltx-2-19b-ic-lora-detailer.safetensors
ltx-2-19b-ic-lora-pose-control.safetensors
ltx-2-19b-lora-camera-control-dolly-in.safetensors
ltx-2-19b-lora-camera-control-dolly-left.safetensors
ltx-2-19b-lora-camera-control-dolly-out.safetensors
ltx-2-19b-lora-camera-control-dolly-right.safetensors
ltx-2-19b-lora-camera-control-jib-down.safetensors
ltx-2-19b-lora-camera-control-jib-up.safetensors
ltx-2-19b-lora-camera-control-static.safetensors
models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth
models_clip_open-clip-xlm-roberta-large-vit-huge-14.safetensors
models_t5_umt5-xxl-enc-bf16.pth
moge-1.0.0-py3-none-any.whl
node-v22.20.0-linux-x64.tar.xz
node-v22.20.0-x64.msi
nvdiffrast-0.4.0-cp310-cp310-linux_x86_64.whl
nvdiffrast-0.4.0-cp310-cp310-win_amd64.whl
nvdiffrast-0.4.0-cp311-cp311-linux_x86_64.whl
nvdiffrast-0.4.0-cp311-cp311-win_amd64.whl
o_voxel-0.0.1-cp310-cp310-win_amd64.whl
o_voxel-0.0.2-cp310-cp310-linux_x86_64.whl
o_voxel-0.0.2-cp310-cp310-win_amd64.whl
o_voxel-0.0.2-cp311-cp311-win_amd64.whl
posi_prompt.pth
pytorch3d-0.7.9-cp310-cp310-linux_x86_64.whl
pytorch3d-0.7.9-cp310-cp310-win_amd64.whl
pytorchvideo-0.1.5-py3-none-any.whl
qwen-image-Q4_1.gguf
qwen-image-Q5_1.gguf
qwen-image-Q6_K.gguf
qwen-image-Q8_0.gguf
qwen_2.5_vl_7b_bf16.safetensors
qwen_2.5_vl_7b_fp16.safetensors
qwen_2.5_vl_7b_fp8_scaled.safetensors
qwen_3_4b_fp4_mixed.safetensors
qwen_3_4b_fp8_mixed.safetensors
qwen_image_bf16.safetensors
qwen_image_fp8_e4m3fn.safetensors
qwen_image_vae.safetensors
qwen_train_vae.safetensors
real_app_ss_1.jpg
real_app_ss_1.png
real_app_ss_2.jpg
real_app_ss_2.png
reprompt/chat_template.jinja
reprompt/config.json
reprompt/generation_config.json
reprompt/hy.tiktoken
reprompt/model-00001-of-00004.safetensors
reprompt/model-00002-of-00004.safetensors
reprompt/model-00003-of-00004.safetensors
reprompt/model-00004-of-00004.safetensors
reprompt/model.safetensors.index.json
reprompt/special_tokens_map.json
reprompt/tokenization_hy.py
reprompt/tokenizer_config.json
sageattention-2.2.0+cu128torch2.8.0.post2-cp39-abi3-win_amd64.whl
sageattention-2.2.0+torch2.9.1.cuda13.1-cp39-abi3-linux_x86_64.whl
sageattention-2.2.0+torch2.9.1.cuda13.1-cp39-abi3-win_amd64.whl
sageattention-2.2.0-cp310-cp310-linux_x86_64.whl
sageattention-2.2.0-cp39-abi3-linux_x86_64.whl
sageattention-2.2.0-cp39-abi3-win_amd64.whl
sageattention-2.2.0.post4-cp39-abi3-linux_x86_64.whl
sageattention-2.2.0.post4-cp39-abi3-win_amd64.whl
sam-3d-objects/pipeline.yaml
sam-3d-objects/slat_decoder_gs.ckpt
sam-3d-objects/slat_decoder_gs.yaml
sam-3d-objects/slat_decoder_gs_4.ckpt
sam-3d-objects/slat_decoder_gs_4.yaml
sam-3d-objects/slat_decoder_mesh.ckpt
sam-3d-objects/slat_decoder_mesh.pt
sam-3d-objects/slat_decoder_mesh.yaml
sam-3d-objects/slat_generator.ckpt
sam-3d-objects/slat_generator.yaml
sam-3d-objects/ss_decoder.ckpt
sam-3d-objects/ss_decoder.yaml
sam-3d-objects/ss_encoder.safetensors
sam-3d-objects/ss_encoder.yaml
sam-3d-objects/ss_generator.ckpt
sam-3d-objects/ss_generator.yaml
seedvr2_ema_3b_fp16.safetensors
seedvr2_ema_7b_fp16.safetensors
seedvr2_ema_7b_fp8_e4m3fn_mixed_block35_fp16.safetensors
seedvr2_ema_7b_sharp_fp16.safetensors
seedvr2_ema_7b_sharp_fp8_e4m3fn_mixed_block35_fp16.safetensors
spconv-2.3.8+cu129.torch2.8-py3-none-any.whl
spconv-2.3.8+cu131.torch2.9.1-py3-none-any.whl
spconv-2.3.8-cp311-cp311-win_amd64.whl
spconv-2.3.8-py3-none-any.whl
spconv_cu131-2.3.8+torch2.9.1-cp311-cp311-win_amd64.whl
stringzilla-4.4.1-cp310-cp310-win_amd64.whl
torch_cluster-1.6.3-cp311-cp311-win_amd64.whl
torch_scatter-2.1.2+pt291cu13-cp311-cp311-win_amd64.whl
torchao-0.17.0+git03bdac063-py3-none-any.whl
torchmcubes-0.1.0-cp310-cp310-linux_x86_64.whl
torchmcubes-0.1.0-cp310-cp310-win_amd64.whl
umt5-xxl-enc-bf16.safetensors
umt5-xxl-encoder-F16.gguf
umt5-xxl-encoder-F32.gguf
umt5-xxl-encoder-Q3_K_M.gguf
umt5-xxl-encoder-Q3_K_S.gguf
umt5-xxl-encoder-Q4_K_M.gguf
umt5-xxl-encoder-Q4_K_S.gguf
umt5-xxl-encoder-Q5_K_M.gguf
umt5-xxl-encoder-Q5_K_S.gguf
umt5-xxl-encoder-Q6_K.gguf
umt5-xxl-encoder-Q8_0.gguf
umt5_xxl_fp16.safetensors
umt5_xxl_fp8_e4m3fn_scaled.safetensors
wan2.1-i2v-14b-720p-BF16.gguf
wan2.1-i2v-14b-720p-F16.gguf
wan2.1-i2v-14b-720p-Q3_K_M.gguf
wan2.1-i2v-14b-720p-Q3_K_S.gguf
wan2.1-i2v-14b-720p-Q4_0.gguf
wan2.1-i2v-14b-720p-Q4_1.gguf
wan2.1-i2v-14b-720p-Q4_K_M.gguf
wan2.1-i2v-14b-720p-Q4_K_S.gguf
wan2.1-i2v-14b-720p-Q5_0.gguf
wan2.1-i2v-14b-720p-Q5_1.gguf
wan2.1-i2v-14b-720p-Q5_K_M.gguf
wan2.1-i2v-14b-720p-Q5_K_S.gguf
wan2.1-i2v-14b-720p-Q6_K.gguf
wan2.1-i2v-14b-720p-Q8_0.gguf
wan2.1_t2v_14B_bf16.safetensors
wan2.2_i2v_high_noise_14B_fp16.safetensors
wan2.2_i2v_high_noise_14B_fp8_scaled.safetensors
wan2.2_i2v_low_noise_14B_fp16.safetensors
wan2.2_i2v_low_noise_14B_fp8_scaled.safetensors
wan2.2_t2v_high_noise_14B_fp16.safetensors
wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors
wan2.2_t2v_low_noise_14B_fp16.safetensors
wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors
wan2.2_ti2v_5B_fp16.safetensors
wan2.2_vae.safetensors
wan21_i2v_480p_14B_Q8.gguf
wan21_i2v_480p_14B_fp16.safetensors
wan21_i2v_480p_14B_fp8_e4m3fn.safetensors
wan21_i2v_720p_14B_fp16.safetensors
wan21_i2v_720p_14B_fp8_e4m3fn.safetensors
wan2_1_14B_FusionX_I2V_fp16.safetensors
wan2_1_14B_FusionX_I2V_fp8.safetensors
wan2_1_14B_FusionX_T2V_fp16.safetensors
wan2_1_14B_FusionX_T2V_fp8.safetensors
wan_2.1_vae.safetensors
xformers-0.0.33+c159edc0.d20250906-cp39-abi3-linux_x86_64.whl
xformers-0.0.33+c159edc0.d20250906-cp39-abi3-win_amd64.whl
xformers-0.0.33+c159edc0.d20250906_flsh-cp39-abi3-linux_x86_64.whl
xformers-0.0.34+41531cee.d20260109-cp39-abi3-linux_x86_64.whl
xformers-0.0.34+41531cee.d20260109-cp39-abi3-win_amd64.whl
yuksek_lisans_ders_secimi.zip