ONNX Model Library

Documentation


Shiina-Mahiru/Firefly-Neko

Author: Shiina-Mahiru

audio-text-to-text
Downloads: 0 · Likes: 5

Created: 2025-02-06 03:49:36+00:00

Updated: 2025-02-25 09:51:24+00:00

View on Hugging Face

Files (267)

.gitattributes
Firefly-desktop\Firefly.4096\texture_00.png
Firefly-desktop\Firefly.cdi3.json
Firefly-desktop\Firefly.moc3
Firefly-desktop\Firefly.model3.json
Firefly-desktop\Firefly.physics3.json
Firefly-desktop\animations\Scene0.motion3.json
Firefly-desktop\animations\Scene02.motion3.json
Firefly-desktop\animations\Scene22.motion3.json
Firefly-desktop\animations\Scene26710.motion3.json
Firefly-desktop\animations\Scene4.motion3.json
Firefly-desktop\expression\expression1.exp3.json
Firefly-desktop\expression\expression2.exp3.json
Firefly-desktop\expression\expression3.exp3.json
Firefly-desktop\voice\meow.mp3
Firefly-desktop\voice\会萤的.mp3
Firefly-desktop\voice\无梦-若风 - 使一颗心免于哀伤-流萤.mp3
Firefly-desktop\voice\音频回正.mp3
GPT-SoVITS-v2-240821.7z
GPT_SoVITS\pretrained_models\.gitignore
GPT_SoVITS\pretrained_models\chinese-hubert-base\config.json
GPT_SoVITS\pretrained_models\chinese-hubert-base\preprocessor_config.json
GPT_SoVITS\pretrained_models\chinese-hubert-base\pytorch_model.bin
GPT_SoVITS\pretrained_models\chinese-roberta-wwm-ext-large\config.json
GPT_SoVITS\pretrained_models\chinese-roberta-wwm-ext-large\pytorch_model.bin
GPT_SoVITS\pretrained_models\chinese-roberta-wwm-ext-large\tokenizer.json
GPT_SoVITS\pretrained_models\gsv-v2final-pretrained\s1bert25hz-5kh-longer-epoch=12-step=369668.ckpt
GPT_SoVITS\pretrained_models\gsv-v2final-pretrained\s2D2333k.pth
GPT_SoVITS\pretrained_models\gsv-v2final-pretrained\s2G2333k.pth
GPT_SoVITS\pretrained_models\s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt
GPT_SoVITS\pretrained_models\s2D488k.pth
GPT_SoVITS\pretrained_models\s2G488k.pth
GPT_weights_v2\流萤-e10.ckpt
README.md
SoVITS_weights_v2\流萤_e15_s810.pth
__pycache__\jieba_fast_functions_py3.cpython-312.pyc
__pycache__\qwen_firefly_neko_stt_multi.cpython-312.pyc
background.txt
dialog_history.json
example.wav
ffmpeg.exe
ffmpeg\__init__.py
ffmpeg\__pycache__\__init__.cpython-312.pyc
ffmpeg\__pycache__\__init__.cpython-39.pyc
ffmpeg\__pycache__\_ffmpeg.cpython-312.pyc
ffmpeg\__pycache__\_ffmpeg.cpython-39.pyc
ffmpeg\__pycache__\_filters.cpython-312.pyc
ffmpeg\__pycache__\_filters.cpython-39.pyc
ffmpeg\__pycache__\_probe.cpython-312.pyc
ffmpeg\__pycache__\_probe.cpython-39.pyc
ffmpeg\__pycache__\_run.cpython-312.pyc
ffmpeg\__pycache__\_run.cpython-39.pyc
ffmpeg\__pycache__\_utils.cpython-312.pyc
ffmpeg\__pycache__\_utils.cpython-39.pyc
ffmpeg\__pycache__\_view.cpython-312.pyc
ffmpeg\__pycache__\_view.cpython-39.pyc
ffmpeg\__pycache__\dag.cpython-312.pyc
ffmpeg\__pycache__\dag.cpython-39.pyc
ffmpeg\__pycache__\nodes.cpython-312.pyc
ffmpeg\__pycache__\nodes.cpython-39.pyc
ffmpeg\_ffmpeg.py
ffmpeg\_filters.py
ffmpeg\_probe.py
ffmpeg\_run.py
ffmpeg\_utils.py
ffmpeg\_view.py
ffmpeg\dag.py
ffmpeg\nodes.py
ffprobe.exe
firefly-neko-live2d.py
firefly-neko-stt-live2d-multi.py
firefly-neko-stt-multi.py
firefly-neko-stt.py
firefly-neko-txt.py
firefly\ref_audio\example.wav
model\Qwen2.5-0.5B-Instruct\config.json
model\Qwen2.5-0.5B-Instruct\generation_config.json
model\Qwen2.5-0.5B-Instruct\merges.txt
model\Qwen2.5-0.5B-Instruct\model.safetensors
model\Qwen2.5-0.5B-Instruct\tokenizer.json
model\Qwen2.5-0.5B-Instruct\tokenizer_config.json
model\Qwen2.5-0.5B-Instruct\vocab.json
model\Qwen2.5-7B-Instruct\config.json
model\Qwen2.5-7B-Instruct\generation_config.json
model\Qwen2.5-7B-Instruct\merges.txt
model\Qwen2.5-7B-Instruct\model-00001-of-00004.safetensors
model\Qwen2.5-7B-Instruct\model-00002-of-00004.safetensors
model\Qwen2.5-7B-Instruct\model-00003-of-00004.safetensors
model\Qwen2.5-7B-Instruct\model-00004-of-00004.safetensors
model\Qwen2.5-7B-Instruct\model.safetensors.index.json
model\Qwen2.5-7B-Instruct\tokenizer.json
model\Qwen2.5-7B-Instruct\tokenizer_config.json
model\Qwen2.5-7B-Instruct\vocab.json
model\punc_ct-transformer_cn-en-common-vocab471067-large\.mdl
model\punc_ct-transformer_cn-en-common-vocab471067-large\.msc
model\punc_ct-transformer_cn-en-common-vocab471067-large\.mv
model\punc_ct-transformer_cn-en-common-vocab471067-large\README.md
model\punc_ct-transformer_cn-en-common-vocab471067-large\config.yaml
model\punc_ct-transformer_cn-en-common-vocab471067-large\configuration.json
model\punc_ct-transformer_cn-en-common-vocab471067-large\example\punc_example.txt
model\punc_ct-transformer_cn-en-common-vocab471067-large\fig\struct.png
model\punc_ct-transformer_cn-en-common-vocab471067-large\jieba.c.dict
model\punc_ct-transformer_cn-en-common-vocab471067-large\jieba_usr_dict
model\punc_ct-transformer_cn-en-common-vocab471067-large\model.pt
model\punc_ct-transformer_cn-en-common-vocab471067-large\tokens.json
model\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\.DS_Store
model\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\.mdl
model\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\.msc
model\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\README.md
model\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\config.yaml
model\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\configuration.json
model\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\example\punc_example.txt
model\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\fig\struct.png
model\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\tokens.json
model\speech_fsmn_vad_zh-cn-16k-common-pytorch\.mdl
model\speech_fsmn_vad_zh-cn-16k-common-pytorch\.msc
model\speech_fsmn_vad_zh-cn-16k-common-pytorch\.mv
model\speech_fsmn_vad_zh-cn-16k-common-pytorch\README.md
model\speech_fsmn_vad_zh-cn-16k-common-pytorch\am.mvn
model\speech_fsmn_vad_zh-cn-16k-common-pytorch\config.yaml
model\speech_fsmn_vad_zh-cn-16k-common-pytorch\configuration.json
model\speech_fsmn_vad_zh-cn-16k-common-pytorch\example\vad_example.wav
model\speech_fsmn_vad_zh-cn-16k-common-pytorch\fig\struct.png
model\speech_fsmn_vad_zh-cn-16k-common-pytorch\model.pt
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\.mdl
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\.msc
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\.mv
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\README.md
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\am.mvn
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\asr_example_hotword.wav
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\config.yaml
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\configuration.json
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\example\asr_example.wav
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\example\hotword.txt
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\fig\res.png
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\fig\seaco.png
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\model.pt
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\seg_dict
model\speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\tokens.json
output\output.wav
readme.md
ref_text.txt
requirements.txt
target_text.txt
tools\__init__.py
tools\__pycache__\__init__.cpython-312.pyc
tools\__pycache__\__init__.cpython-39.pyc
tools\__pycache__\my_utils.cpython-312.pyc
tools\__pycache__\my_utils.cpython-39.pyc
tools\asr\__pycache__\config.cpython-39.pyc
tools\asr\config.py
tools\asr\fasterwhisper_asr.py
tools\asr\funasr_asr.py
tools\asr\models\.gitignore
tools\asr\models\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\.DS_Store
tools\asr\models\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\.mdl
tools\asr\models\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\.msc
tools\asr\models\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\README.md
tools\asr\models\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\config.yaml
tools\asr\models\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\configuration.json
tools\asr\models\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\example\punc_example.txt
tools\asr\models\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\fig\struct.png
tools\asr\models\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\model.pt
tools\asr\models\punc_ct-transformer_zh-cn-common-vocab272727-pytorch\tokens.json
tools\asr\models\speech_fsmn_vad_zh-cn-16k-common-pytorch\.mdl
tools\asr\models\speech_fsmn_vad_zh-cn-16k-common-pytorch\.msc
tools\asr\models\speech_fsmn_vad_zh-cn-16k-common-pytorch\README.md
tools\asr\models\speech_fsmn_vad_zh-cn-16k-common-pytorch\am.mvn
tools\asr\models\speech_fsmn_vad_zh-cn-16k-common-pytorch\config.yaml
tools\asr\models\speech_fsmn_vad_zh-cn-16k-common-pytorch\configuration.json
tools\asr\models\speech_fsmn_vad_zh-cn-16k-common-pytorch\example\vad_example.wav
tools\asr\models\speech_fsmn_vad_zh-cn-16k-common-pytorch\fig\struct.png
tools\asr\models\speech_fsmn_vad_zh-cn-16k-common-pytorch\model.pt
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\.mdl
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\.msc
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\README.md
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\am.mvn
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\config.yaml
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\configuration.json
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\example\asr_example.wav
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\fig\struct.png
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\model.pt
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\seg_dict
tools\asr\models\speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch\tokens.json
tools\cmd-denoise.py
tools\denoise-model\.gitignore
tools\i18n\__pycache__\i18n.cpython-312.pyc
tools\i18n\__pycache__\i18n.cpython-39.pyc
tools\i18n\i18n.py
tools\i18n\locale\en_US.json
tools\i18n\locale\es_ES.json
tools\i18n\locale\fr_FR.json
tools\i18n\locale\it_IT.json
tools\i18n\locale\ja_JP.json
tools\i18n\locale\ko_KR.json
tools\i18n\locale\pt_BR.json
tools\i18n\locale\ru_RU.json
tools\i18n\locale\tr_TR.json
tools\i18n\locale\zh_CN.json
tools\i18n\locale\zh_HK.json
tools\i18n\locale\zh_SG.json
tools\i18n\locale\zh_TW.json
tools\i18n\locale_diff.py
tools\i18n\scan_i18n.py
tools\my_utils.py
tools\slice_audio.py
tools\slicer2.py
tools\subfix_webui.py
tools\uvr5\bs_roformer\__init__.py
tools\uvr5\bs_roformer\attend.py
tools\uvr5\bs_roformer\bs_roformer.py
tools\uvr5\bsroformer.py
tools\uvr5\lib\lib_v5\dataset.py
tools\uvr5\lib\lib_v5\layers.py
tools\uvr5\lib\lib_v5\layers_123812KB.py
tools\uvr5\lib\lib_v5\layers_123821KB.py
tools\uvr5\lib\lib_v5\layers_33966KB.py
tools\uvr5\lib\lib_v5\layers_537227KB.py
tools\uvr5\lib\lib_v5\layers_537238KB.py
tools\uvr5\lib\lib_v5\layers_new.py
tools\uvr5\lib\lib_v5\model_param_init.py
tools\uvr5\lib\lib_v5\modelparams\1band_sr16000_hl512.json
tools\uvr5\lib\lib_v5\modelparams\1band_sr32000_hl512.json
tools\uvr5\lib\lib_v5\modelparams\1band_sr33075_hl384.json
tools\uvr5\lib\lib_v5\modelparams\1band_sr44100_hl1024.json
tools\uvr5\lib\lib_v5\modelparams\1band_sr44100_hl256.json
tools\uvr5\lib\lib_v5\modelparams\1band_sr44100_hl512.json
tools\uvr5\lib\lib_v5\modelparams\1band_sr44100_hl512_cut.json
tools\uvr5\lib\lib_v5\modelparams\2band_32000.json
tools\uvr5\lib\lib_v5\modelparams\2band_44100_lofi.json
tools\uvr5\lib\lib_v5\modelparams\2band_48000.json
tools\uvr5\lib\lib_v5\modelparams\3band_44100.json
tools\uvr5\lib\lib_v5\modelparams\3band_44100_mid.json
tools\uvr5\lib\lib_v5\modelparams\3band_44100_msb2.json
tools\uvr5\lib\lib_v5\modelparams\4band_44100.json
tools\uvr5\lib\lib_v5\modelparams\4band_44100_mid.json
tools\uvr5\lib\lib_v5\modelparams\4band_44100_msb.json
tools\uvr5\lib\lib_v5\modelparams\4band_44100_msb2.json
tools\uvr5\lib\lib_v5\modelparams\4band_44100_reverse.json
tools\uvr5\lib\lib_v5\modelparams\4band_44100_sw.json
tools\uvr5\lib\lib_v5\modelparams\4band_v2.json
tools\uvr5\lib\lib_v5\modelparams\4band_v2_sn.json
tools\uvr5\lib\lib_v5\modelparams\4band_v3.json
tools\uvr5\lib\lib_v5\modelparams\ensemble.json
tools\uvr5\lib\lib_v5\nets.py
tools\uvr5\lib\lib_v5\nets_123812KB.py
tools\uvr5\lib\lib_v5\nets_123821KB.py
tools\uvr5\lib\lib_v5\nets_33966KB.py
tools\uvr5\lib\lib_v5\nets_537227KB.py
tools\uvr5\lib\lib_v5\nets_537238KB.py
tools\uvr5\lib\lib_v5\nets_61968KB.py
tools\uvr5\lib\lib_v5\nets_new.py
tools\uvr5\lib\lib_v5\spec_utils.py
tools\uvr5\lib\name_params.json
tools\uvr5\lib\utils.py
tools\uvr5\mdxnet.py
tools\uvr5\uvr5_weights\.gitignore
tools\uvr5\uvr5_weights\HP2_all_vocals.pth
tools\uvr5\uvr5_weights\HP5_only_main_vocal.pth
tools\uvr5\uvr5_weights\VR-DeEchoAggressive.pth
tools\uvr5\uvr5_weights\VR-DeEchoDeReverb.pth
tools\uvr5\uvr5_weights\VR-DeEchoNormal.pth
tools\uvr5\uvr5_weights\model_bs_roformer_ep_317_sdr_12.9755.ckpt
tools\uvr5\uvr5_weights\onnx_dereverb_By_FoxJoy\vocals.onnx
tools\uvr5\vr.py
tools\uvr5\webui.py
weight.json
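
The `Firefly-desktop` directory above is a Live2D character package: `Firefly.model3.json` is the manifest that ties the `.moc3` core, the texture, the physics definition, and the motion/expression JSON files together. As a rough illustration of how such a manifest references its assets, here is a minimal sketch using only the standard library. The manifest dictionary below is hypothetical, mirroring the file names in this listing; the authoritative schema is defined by the Live2D Cubism SDK, not by this repository.

```python
import json

# Hypothetical model3.json-style manifest, mirroring the Firefly-desktop files.
# The real file is produced by the Live2D Cubism editor; this is only a sketch.
manifest = {
    "Version": 3,
    "FileReferences": {
        "Moc": "Firefly.moc3",
        "Textures": ["Firefly.4096/texture_00.png"],
        "Physics": "Firefly.physics3.json",
        "Motions": {
            "Idle": [{"File": "animations/Scene0.motion3.json"}],
        },
        "Expressions": [
            {"Name": "expression1", "File": "expression/expression1.exp3.json"},
        ],
    },
}

def referenced_files(m):
    """Collect every asset path the manifest references."""
    refs = m["FileReferences"]
    files = [refs["Moc"], refs["Physics"], *refs["Textures"]]
    for group in refs["Motions"].values():
        files += [entry["File"] for entry in group]
    files += [e["File"] for e in refs["Expressions"]]
    return files

# Round-trip through JSON text, as a loader reading the file from disk would.
loaded = json.loads(json.dumps(manifest))
print(referenced_files(loaded))
```

A loader for this repository would apply the same walk to the real `Firefly-desktop\Firefly.model3.json` to verify that every referenced asset is present before rendering.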