Model Card
ONNX-exported versions of the CLIP models.
Models
A total of 4 models are exported.
| Name | Image (Params / FLOPS) | Image Size | Image Width (Encoder / Embedding) | Text (Params / FLOPS) | Text Width (Encoder / Embedding) | Created |
|---|---|---|---|---|---|---|
| openai/clip-vit-large-patch14-336 | 302.9M / 174.7G | 336 | 1024 / 768 | 85.1M / 1.2G | 768 / 768 | 2022-04-22 |
| openai/clip-vit-large-patch14 | 302.9M / 77.8G | 224 | 1024 / 768 | 85.1M / 1.2G | 768 / 768 | 2022-03-03 |
| openai/clip-vit-base-patch16 | 85.6M / 16.9G | 224 | 768 / 512 | 37.8M / 529.2M | 512 / 512 | 2022-03-03 |
| openai/clip-vit-base-patch32 | 87.4M / 4.4G | 224 | 768 / 512 | 37.8M / 529.2M | 512 / 512 | 2022-03-03 |
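Each pair of exported encoders can be combined for zero-shot classification in the usual CLIP fashion: L2-normalize both embeddings, take scaled cosine similarities, and apply a softmax over the candidate labels. A minimal sketch of that scoring step (the `logit_scale` value here is a placeholder, not the value shipped with these exports):

```python
import numpy as np

def zero_shot_probs(image_emb, text_embs, logit_scale=100.0):
    """CLIP-style zero-shot scores.

    image_emb: (D,) embedding from the image encoder.
    text_embs: (N, D) embeddings from the text encoder, one per label.
    Returns a probability distribution of shape (N,).
    """
    # L2-normalize so the dot product is a cosine similarity.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=-1, keepdims=True)
    # Scale similarities, then softmax (subtract max for stability).
    logits = logit_scale * (txt @ img)
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

This mirrors the scoring used by the original CLIP: the temperature (`logit_scale`) sharpens the distribution over labels.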
deepghs/clip_onnx
Author: deepghs
Tags: zero-shot-classification, dghs-realutils
Created: 2025-01-28 18:20:24+00:00
Updated: 2025-01-28 18:29:13+00:00
Files (23)
.gitattributes
README.md
models.parquet
openai/clip-vit-base-patch16/image_encode.onnx
openai/clip-vit-base-patch16/meta.json
openai/clip-vit-base-patch16/preprocessor.json
openai/clip-vit-base-patch16/text_encode.onnx
openai/clip-vit-base-patch16/tokenizer.json
openai/clip-vit-base-patch32/image_encode.onnx
openai/clip-vit-base-patch32/meta.json
openai/clip-vit-base-patch32/preprocessor.json
openai/clip-vit-base-patch32/text_encode.onnx
openai/clip-vit-base-patch32/tokenizer.json
openai/clip-vit-large-patch14-336/image_encode.onnx
openai/clip-vit-large-patch14-336/meta.json
openai/clip-vit-large-patch14-336/preprocessor.json
openai/clip-vit-large-patch14-336/text_encode.onnx
openai/clip-vit-large-patch14-336/tokenizer.json
openai/clip-vit-large-patch14/image_encode.onnx
openai/clip-vit-large-patch14/meta.json
openai/clip-vit-large-patch14/preprocessor.json
openai/clip-vit-large-patch14/text_encode.onnx
openai/clip-vit-large-patch14/tokenizer.json
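The `image_encode.onnx` files can be run directly with `onnxruntime`. The sketch below assumes the encoder takes a single NCHW float32 tensor at the image size listed in the table; the exact input/output names are not documented here and should be inspected via `sess.get_inputs()` / `sess.get_outputs()`:

```python
import numpy as np

def make_dummy_pixel_values(image_size, batch=1):
    # NCHW float32 input at the model's listed image size
    # (224 for most exports, 336 for clip-vit-large-patch14-336).
    return np.zeros((batch, 3, image_size, image_size), dtype=np.float32)

# Usage sketch (requires `pip install onnxruntime huggingface_hub`):
#
# from huggingface_hub import hf_hub_download
# import onnxruntime as ort
#
# path = hf_hub_download(
#     "deepghs/clip_onnx",
#     "openai/clip-vit-base-patch32/image_encode.onnx",
# )
# sess = ort.InferenceSession(path)
# (emb,) = sess.run(
#     None, {sess.get_inputs()[0].name: make_dummy_pixel_values(224)}
# )
```

Real inputs should be preprocessed according to the per-model `preprocessor.json` (and text inputs tokenized with the per-model `tokenizer.json`) before being fed to the encoders.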