SMaLL-100 Model
SMaLL-100 is a compact and fast massively multilingual machine translation model covering more than 10K language pairs. It achieves results competitive with M2M-100 while being much smaller and faster. The model was introduced in this paper (accepted at EMNLP 2022) and originally released in this repository.
The model architecture and configuration are the same as in M2M-100, but the tokenizer is modified to adjust the language codes; for now, you therefore need to load the tokenizer locally from the tokenization_small100.py file.
Demo: https://huggingface.co/spaces/alirezamsh/small100
Note: SMALL100Tokenizer requires sentencepiece, so make sure to install it with:
pip install sentencepiece
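Since SMALL100Tokenizer is not part of transformers, tokenization_small100.py has to sit next to your script. One way to fetch it is via huggingface_hub, as in the sketch below; it assumes the file is hosted in the alirezamsh/small100 repository and that huggingface_hub is installed.

```python
from huggingface_hub import hf_hub_download

# Download tokenization_small100.py from the model repo so that
# SMALL100Tokenizer can be imported locally (e.g. after copying the
# returned file into your working directory).
path = hf_hub_download(
    repo_id="alirezamsh/small100",
    filename="tokenization_small100.py",
)
print(path)
```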
- Supervised Training
SMaLL-100 is a sequence-to-sequence model for the translation task. The model input is source: [tgt_lang_code] + src_tokens + [EOS], and the output is target: tgt_tokens + [EOS].
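This sequence layout can be illustrated with a small sketch; the token strings below are placeholders for illustration, not actual entries from the SMALL100Tokenizer vocabulary.

```python
# Illustration of SMaLL-100's sequence layout. The "__fr__" language code
# and the EOS marker are placeholder strings, not real vocabulary items.
EOS = "</s>"

def build_source(tgt_lang_code, src_tokens):
    # source = [tgt_lang_code] + src_tokens + [EOS]
    return [f"__{tgt_lang_code}__"] + src_tokens + [EOS]

def build_target(tgt_tokens):
    # target = tgt_tokens + [EOS]
    return tgt_tokens + [EOS]

print(build_source("fr", ["Life", "is", "like", "a", "box", "of", "chocolates", "."]))
# ['__fr__', 'Life', 'is', 'like', 'a', 'box', 'of', 'chocolates', '.', '</s>']
```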
An example of supervised training:
from transformers import M2M100ForConditionalGeneration
from tokenization_small100 import SMALL100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100")
tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100", tgt_lang="fr")
src_text = "Life is like a box of chocolates."
tgt_text = "La vie est comme une boîte de chocolat."
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
loss = model(**model_inputs).loss # forward pass
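Extending the forward pass above into a full update step could look like the sketch below. A toy stand-in model keeps it runnable without downloading weights; for real training, substitute the M2M100ForConditionalGeneration model and the tokenized model_inputs from the example above. The AdamW learning rate here is an illustrative assumption, not a published hyperparameter.

```python
import torch
from torch import nn

# Toy stand-in for the real seq2seq model: like
# M2M100ForConditionalGeneration, it returns a loss when labels are given.
class ToySeq2Seq(nn.Module):
    def __init__(self, vocab_size=32, dim=8):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, input_ids, labels):
        logits = self.head(self.emb(input_ids))
        return nn.functional.cross_entropy(
            logits.view(-1, logits.size(-1)), labels.view(-1)
        )

model = ToySeq2Seq()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # illustrative lr
input_ids = torch.randint(0, 32, (1, 6))  # stands in for model_inputs["input_ids"]
labels = torch.randint(0, 32, (1, 6))     # stands in for model_inputs["labels"]

for step in range(3):
    loss = model(input_ids, labels)  # forward pass returns the loss
    loss.backward()                  # backpropagate
    optimizer.step()                 # update parameters
    optimizer.zero_grad()
```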
Training data can be provided upon request.
- Generation
A beam size of 5 and a maximum target length of 256 are used for generation.
from transformers import M2M100ForConditionalGeneration
from tokenization_small100 import SMALL100Tokenizer
hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"
model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100")
tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100")
# translate Hindi to French
tokenizer.tgt_lang = "fr"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."
# translate Chinese to English
tokenizer.tgt_lang = "en"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh)
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
- Evaluation
Please refer to the original repository for spBLEU computation.
- Languages Covered
Afrikaans, Amharic, Arabic, Asturian, Azerbaijani, Bashkir, Belarusian, Bulgarian, Bengali, Breton, Bosnian, Catalan; Valencian, Cebuano, Czech, Welsh, Danish, German, Greek, English, Spanish, Estonian, Persian, Fulah, Finnish, French, Western Frisian, Irish, Gaelic; Scottish Gaelic, Galician, Gujarati, Hausa, Hebrew, Hindi, Croatian, Haitian; Haitian Creole, Hungarian, Armenian, Indonesian, Igbo, Iloko, Icelandic, Italian, Japanese, Javanese, Georgian, Kazakh, Central Khmer, Kannada, Korean, Luxembourgish; Letzeburgesch, Ganda, Lingala, Lao, Lithuanian, Latvian, Malagasy, Macedonian, Malayalam, Mongolian, Marathi, Malay, Burmese, Nepali, Dutch; Flemish, Norwegian, Northern Sotho, Occitan (post 1500), Oriya, Punjabi, Polish, Pashto, Portuguese, Romanian; Moldovan, Russian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Albanian, Serbian, Swati, Sundanese, Swedish, Swahili, Tamil, Thai, Tagalog, Tswana, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Wolof, Xhosa, Yiddish, Yoruba, Chinese, Zulu
Citation
If you use this model for your research, please cite the following work:
@inproceedings{mohammadshahi-etal-2022-small,
title = "{SM}a{LL}-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages",
author = "Mohammadshahi, Alireza and
Nikoulina, Vassilina and
Berard, Alexandre and
Brun, Caroline and
Henderson, James and
Besacier, Laurent",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.571",
pages = "8348--8359",
abstract = "In recent years, multilingual machine translation models have achieved promising performance on low-resource language pairs by sharing information between similar languages, thus enabling zero-shot translation. To overcome the {``}curse of multilinguality{''}, these models often opt for scaling up the number of parameters, which makes their use in resource-constrained environments challenging. We introduce SMaLL-100, a distilled version of the M2M-100(12B) model, a massively multilingual machine translation model covering 100 languages. We train SMaLL-100 with uniform sampling across all language pairs and therefore focus on preserving the performance of low-resource languages. We evaluate SMaLL-100 on different low-resource benchmarks: FLORES-101, Tatoeba, and TICO-19 and demonstrate that it outperforms previous massively multilingual models of comparable sizes (200-600M) while improving inference latency and memory usage. Additionally, our model achieves comparable results to M2M-100 (1.2B), while being 3.6x smaller and 4.3x faster at inference.",
}
@inproceedings{mohammadshahi-etal-2022-compressed,
title = "What Do Compressed Multilingual Machine Translation Models Forget?",
author = "Mohammadshahi, Alireza and
Nikoulina, Vassilina and
Berard, Alexandre and
Brun, Caroline and
Henderson, James and
Besacier, Laurent",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.317",
pages = "4308--4329",
abstract = "Recently, very large pre-trained models achieve state-of-the-art results in various natural language processing (NLP) tasks, but their size makes it more challenging to apply them in resource-constrained environments. Compression techniques allow to drastically reduce the size of the models and therefore their inference time with negligible impact on top-tier metrics. However, the general performance averaged across multiple tasks and/or languages may hide a drastic performance drop on under-represented features, which could result in the amplification of biases encoded by the models. In this work, we assess the impact of compression methods on Multilingual Neural Machine Translation models (MNMT) for various language groups, gender, and semantic biases by extensive analysis of compressed models on different machine translation benchmarks, i.e. FLORES-101, MT-Gender, and DiBiMT. We show that the performance of under-represented languages drops significantly, while the average BLEU metric only slightly decreases. Interestingly, the removal of noisy memorization with compression leads to a significant improvement for some medium-resource languages. Finally, we demonstrate that compression amplifies intrinsic gender and semantic biases, even in high-resource languages.",
}
fukayatti0/small100-quantized-int8
Author: fukayatti0
Created: 2025-05-31 07:23:40+00:00
Updated: 2025-05-31 07:37:44+00:00