### opus-mt-en-mh
* source languages: en
* target languages: mh
* OPUS readme: [en-mh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.mh | 29.7 | 0.479 |
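The three artifact links above follow a fixed naming scheme on the OPUS-MT object store. A small helper (a sketch for convenience, not part of the official OPUS-MT tooling) can derive all three URLs from the language pair and release name:

```python
BASE_URL = "https://object.pouta.csc.fi/OPUS-MT-models"

def opus_mt_artifacts(pair: str, release: str) -> dict:
    """Build the weights/test/eval URLs for an OPUS-MT release.

    pair    -- language pair such as "en-mh"
    release -- release name such as "opus-2020-01-08"
    """
    prefix = f"{BASE_URL}/{pair}/{release}"
    return {
        "weights": f"{prefix}.zip",
        "test_translations": f"{prefix}.test.txt",
        "test_scores": f"{prefix}.eval.txt",
    }

# Reproduces the links listed above for this model:
urls = opus_mt_artifacts("en-mh", "opus-2020-01-08")
# urls["weights"] == "https://object.pouta.csc.fi/OPUS-MT-models/en-mh/opus-2020-01-08.zip"
```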
## Model metadata
* model ID: Helsinki-NLP/opus-mt-en-mh
* license: apache-2.0
* library: transformers
* pipeline tag: translation
* tags: transformers, pytorch, tf, marian, text2text-generation, translation, en, mh, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us
* created at: 2022-03-02T23:29:04+00:00
### opus-mt-en-mk
* source languages: en
* target languages: mk
* OPUS readme: [en-mk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mk/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mk/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mk/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.mk | 52.1 | 0.683 |
## Model metadata
* model ID: Helsinki-NLP/opus-mt-en-mk
* license: apache-2.0
* library: transformers
* pipeline tag: translation
* tags: transformers, pytorch, tf, marian, text2text-generation, translation, en, mk, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us
* created at: 2022-03-02T23:29:04+00:00
### eng-mkh
* source group: English
* target group: Mon-Khmer languages
* OPUS readme: [eng-mkh](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md)
* model: transformer
* source language(s): eng
* target language(s): kha khm khm_Latn mnw vie vie_Hani
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (where `id` is a valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.eval.txt)
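Because this model is multilingual on the target side, every input sentence must begin with the language token described above. A minimal sketch of input preparation (the helper name and the validation step are ours, not part of any library):

```python
# Valid target-language IDs for this model, taken from the list above.
TARGET_IDS = {"kha", "khm", "khm_Latn", "mnw", "vie", "vie_Hani"}

def add_language_token(text: str, lang_id: str) -> str:
    """Prefix the sentence-initial target-language token, e.g. '>>vie<<'."""
    if lang_id not in TARGET_IDS:
        raise ValueError(f"unknown target language ID: {lang_id}")
    return f">>{lang_id}<< {text}"

add_language_token("How are you?", "vie")  # '>>vie<< How are you?'
```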
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-kha.eng.kha | 0.1 | 0.015 |
| Tatoeba-test.eng-khm.eng.khm | 0.2 | 0.226 |
| Tatoeba-test.eng-mnw.eng.mnw | 0.7 | 0.003 |
| Tatoeba-test.eng.multi | 16.5 | 0.330 |
| Tatoeba-test.eng-vie.eng.vie | 33.7 | 0.513 |
### System Info:
- hf_name: eng-mkh
- source_languages: eng
- target_languages: mkh
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mkh/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'vi', 'km', 'mkh']
- src_constituents: {'eng'}
- tgt_constituents: {'vie_Hani', 'mnw', 'vie', 'kha', 'khm_Latn', 'khm'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mkh/opus-2020-07-27.test.txt
- src_alpha3: eng
- tgt_alpha3: mkh
- short_pair: en-mkh
- chrF2_score: 0.33
- bleu: 16.5
- brevity_penalty: 1.0
- ref_len: 34734.0
- src_name: English
- tgt_name: Mon-Khmer languages
- train_date: 2020-07-27
- src_alpha2: en
- tgt_alpha2: mkh
- prefer_old: False
- long_pair: eng-mkh
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41

## Model metadata
* model ID: Helsinki-NLP/opus-mt-en-mkh
* license: apache-2.0
* languages: en, vi, km, mkh
* library: transformers
* pipeline tag: translation
* tags: transformers, pytorch, tf, marian, text2text-generation, translation, en, vi, km, mkh, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us
* created at: 2022-03-02T23:29:04+00:00
### opus-mt-en-ml
* source languages: en
* target languages: ml
* OPUS readme: [en-ml](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ml/README.md)
* dataset: opus+bt+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt+bt-2020-04-28.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.zip)
* test set translations: [opus+bt+bt-2020-04-28.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.test.txt)
* test set scores: [opus+bt+bt-2020-04-28.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ml/opus+bt+bt-2020-04-28.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ml | 19.1 | 0.536 |
## Model metadata
* model ID: Helsinki-NLP/opus-mt-en-ml
* license: apache-2.0
* library: transformers
* pipeline tag: translation
* tags: transformers, pytorch, tf, marian, text2text-generation, translation, en, ml, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us
* created at: 2022-03-02T23:29:04+00:00
### opus-mt-en-mos
* source languages: en
* target languages: mos
* OPUS readme: [en-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mos/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mos/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.mos | 26.9 | 0.417 |
## Model metadata
* model ID: Helsinki-NLP/opus-mt-en-mos
* license: apache-2.0
* library: transformers
* pipeline tag: translation
* tags: transformers, pytorch, tf, marian, text2text-generation, translation, en, mos, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us
* created at: 2022-03-02T23:29:04+00:00
### opus-mt-en-mr
* source languages: en
* target languages: mr
* OPUS readme: [en-mr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mr/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mr/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mr/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.mr | 22.0 | 0.397 |
## Model metadata
* model ID: Helsinki-NLP/opus-mt-en-mr
* license: apache-2.0
* library: transformers
* pipeline tag: translation
* tags: transformers, pytorch, tf, marian, text2text-generation, translation, en, mr, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us
* created at: 2022-03-02T23:29:04+00:00
### opus-mt-en-mt
* source languages: en
* target languages: mt
* OPUS readme: [en-mt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-mt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-mt/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.mt | 47.5 | 0.640 |
| Tatoeba.en.mt | 25.0 | 0.620 |
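As with other Marian-based OPUS-MT checkpoints, the converted model can be loaded through the Hugging Face transformers library. The sketch below uses the standard Marian API (`MarianMTModel`, `MarianTokenizer`); actually running `translate` requires transformers with a PyTorch backend installed and downloads the weights on first use:

```python
MODEL_NAME = "Helsinki-NLP/opus-mt-en-mt"

def translate(sentences, model_name=MODEL_NAME):
    """Translate a list of English sentences to Maltese (usage sketch)."""
    # Imported lazily so the module can be inspected without transformers installed.
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(g, skip_special_tokens=True) for g in generated]

# translate(["How are you?"])  # downloads the model weights on first call
```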
## Model metadata
* model ID: Helsinki-NLP/opus-mt-en-mt
* license: apache-2.0
* library: transformers
* pipeline tag: translation
* tags: transformers, pytorch, tf, marian, text2text-generation, translation, en, mt, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, region:us
* created at: 2022-03-02T23:29:04+00:00
### eng-mul
* source group: English
* target group: Multiple languages
* OPUS readme: [eng-mul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mul/README.md)
* model: transformer
* source language(s): eng
* target language(s): abk acm ady afb afh_Latn afr akl_Latn aln amh ang_Latn apc ara arg arq ary arz asm ast avk_Latn awa aze_Latn bak bam_Latn bel bel_Latn ben bho bod bos_Latn bre brx brx_Latn bul bul_Latn cat ceb ces cha che chr chv cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant cor cos crh crh_Latn csb_Latn cym dan deu dsb dtp dws_Latn egl ell enm_Latn epo est eus ewe ext fao fij fin fkv_Latn fra frm_Latn frr fry fuc fuv gan gcf_Latn gil gla gle glg glv gom gos got_Goth grc_Grek grn gsw guj hat hau_Latn haw heb hif_Latn hil hin hnj_Latn hoc hoc_Latn hrv hsb hun hye iba ibo ido ido_Latn ike_Latn ile_Latn ilo ina_Latn ind isl ita izh jav jav_Java jbo jbo_Cyrl jbo_Latn jdt_Cyrl jpn kab kal kan kat kaz_Cyrl kaz_Latn kek_Latn kha khm khm_Latn kin kir_Cyrl kjh kpv krl ksh kum kur_Arab kur_Latn lad lad_Latn lao lat_Latn lav ldn_Latn lfn_Cyrl lfn_Latn lij lin lit liv_Latn lkt lld_Latn lmo ltg ltz lug lzh lzh_Hans mad mah mai mal mar max_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob_Hebr nog non_Latn nov_Latn npi nya oci ori orv_Cyrl oss ota_Arab ota_Latn pag pan_Guru pap pau pdc pes pes_Latn pes_Thaa pms pnb pol por ppl_Latn prg_Latn pus quc qya qya_Latn rap rif_Latn roh rom ron rue run rus sag sah san_Deva scn sco sgs shs_Latn shy_Latn sin sjn_Latn slv sma sme smo sna snd_Arab som spa sqi srp_Cyrl srp_Latn stq sun swe swg swh tah tam tat tat_Arab tat_Latn tel tet tgk_Cyrl tha tir tlh_Latn tly_Latn tmw_Latn toi_Latn ton tpw_Latn tso tuk tuk_Latn tur tvl tyv tzl tzl_Latn udm uig_Arab uig_Cyrl ukr umb urd uzb_Cyrl uzb_Latn vec vie vie_Hani vol_Latn vro war wln wol wuu xal xho yid yor yue yue_Hans yue_Hant zho zho_Hans zho_Hant zlm_Latn zsm_Latn zul zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required, in the form `>>id<<` (where `id` is a valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-enghin.eng.hin | 5.0 | 0.288 |
| newsdev2015-enfi-engfin.eng.fin | 9.3 | 0.418 |
| newsdev2016-enro-engron.eng.ron | 17.2 | 0.488 |
| newsdev2016-entr-engtur.eng.tur | 8.2 | 0.402 |
| newsdev2017-enlv-englav.eng.lav | 12.9 | 0.444 |
| newsdev2017-enzh-engzho.eng.zho | 17.6 | 0.170 |
| newsdev2018-enet-engest.eng.est | 10.9 | 0.423 |
| newsdev2019-engu-engguj.eng.guj | 5.2 | 0.284 |
| newsdev2019-enlt-englit.eng.lit | 11.0 | 0.431 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 22.6 | 0.521 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 25.9 | 0.546 |
| newssyscomb2009-engces.eng.ces | 10.3 | 0.394 |
| newssyscomb2009-engdeu.eng.deu | 13.3 | 0.459 |
| newssyscomb2009-engfra.eng.fra | 21.5 | 0.522 |
| newssyscomb2009-enghun.eng.hun | 8.1 | 0.371 |
| newssyscomb2009-engita.eng.ita | 22.1 | 0.540 |
| newssyscomb2009-engspa.eng.spa | 23.8 | 0.531 |
| news-test2008-engces.eng.ces | 9.0 | 0.376 |
| news-test2008-engdeu.eng.deu | 14.2 | 0.451 |
| news-test2008-engfra.eng.fra | 19.8 | 0.500 |
| news-test2008-engspa.eng.spa | 22.8 | 0.518 |
| newstest2009-engces.eng.ces | 9.8 | 0.392 |
| newstest2009-engdeu.eng.deu | 13.7 | 0.454 |
| newstest2009-engfra.eng.fra | 20.7 | 0.514 |
| newstest2009-enghun.eng.hun | 8.4 | 0.370 |
| newstest2009-engita.eng.ita | 22.4 | 0.538 |
| newstest2009-engspa.eng.spa | 23.5 | 0.532 |
| newstest2010-engces.eng.ces | 10.0 | 0.393 |
| newstest2010-engdeu.eng.deu | 15.2 | 0.463 |
| newstest2010-engfra.eng.fra | 22.0 | 0.524 |
| newstest2010-engspa.eng.spa | 27.2 | 0.556 |
| newstest2011-engces.eng.ces | 10.8 | 0.392 |
| newstest2011-engdeu.eng.deu | 14.2 | 0.449 |
| newstest2011-engfra.eng.fra | 24.3 | 0.544 |
| newstest2011-engspa.eng.spa | 28.3 | 0.559 |
| newstest2012-engces.eng.ces | 9.9 | 0.377 |
| newstest2012-engdeu.eng.deu | 14.3 | 0.449 |
| newstest2012-engfra.eng.fra | 23.2 | 0.530 |
| newstest2012-engrus.eng.rus | 16.0 | 0.463 |
| newstest2012-engspa.eng.spa | 27.8 | 0.555 |
| newstest2013-engces.eng.ces | 11.0 | 0.392 |
| newstest2013-engdeu.eng.deu | 16.4 | 0.469 |
| newstest2013-engfra.eng.fra | 22.6 | 0.515 |
| newstest2013-engrus.eng.rus | 12.1 | 0.414 |
| newstest2013-engspa.eng.spa | 24.9 | 0.532 |
| newstest2014-hien-enghin.eng.hin | 7.2 | 0.311 |
| newstest2015-encs-engces.eng.ces | 10.9 | 0.396 |
| newstest2015-ende-engdeu.eng.deu | 18.3 | 0.490 |
| newstest2015-enfi-engfin.eng.fin | 10.1 | 0.421 |
| newstest2015-enru-engrus.eng.rus | 14.5 | 0.445 |
| newstest2016-encs-engces.eng.ces | 12.2 | 0.408 |
| newstest2016-ende-engdeu.eng.deu | 21.4 | 0.517 |
| newstest2016-enfi-engfin.eng.fin | 11.2 | 0.435 |
| newstest2016-enro-engron.eng.ron | 16.6 | 0.472 |
| newstest2016-enru-engrus.eng.rus | 13.4 | 0.435 |
| newstest2016-entr-engtur.eng.tur | 8.1 | 0.385 |
| newstest2017-encs-engces.eng.ces | 9.6 | 0.377 |
| newstest2017-ende-engdeu.eng.deu | 17.9 | 0.482 |
| newstest2017-enfi-engfin.eng.fin | 11.8 | 0.440 |
| newstest2017-enlv-englav.eng.lav | 9.6 | 0.412 |
| newstest2017-enru-engrus.eng.rus | 14.1 | 0.446 |
| newstest2017-entr-engtur.eng.tur | 8.0 | 0.378 |
| newstest2017-enzh-engzho.eng.zho | 16.8 | 0.175 |
| newstest2018-encs-engces.eng.ces | 9.8 | 0.380 |
| newstest2018-ende-engdeu.eng.deu | 23.8 | 0.536 |
| newstest2018-enet-engest.eng.est | 11.8 | 0.433 |
| newstest2018-enfi-engfin.eng.fin | 7.8 | 0.398 |
| newstest2018-enru-engrus.eng.rus | 12.2 | 0.434 |
| newstest2018-entr-engtur.eng.tur | 7.5 | 0.383 |
| newstest2018-enzh-engzho.eng.zho | 18.3 | 0.179 |
| newstest2019-encs-engces.eng.ces | 10.7 | 0.389 |
| newstest2019-ende-engdeu.eng.deu | 21.0 | 0.512 |
| newstest2019-enfi-engfin.eng.fin | 10.4 | 0.420 |
| newstest2019-engu-engguj.eng.guj | 5.8 | 0.297 |
| newstest2019-enlt-englit.eng.lit | 8.0 | 0.388 |
| newstest2019-enru-engrus.eng.rus | 13.0 | 0.415 |
| newstest2019-enzh-engzho.eng.zho | 15.0 | 0.192 |
| newstestB2016-enfi-engfin.eng.fin | 9.0 | 0.414 |
| newstestB2017-enfi-engfin.eng.fin | 9.5 | 0.415 |
| Tatoeba-test.eng-abk.eng.abk | 4.2 | 0.275 |
| Tatoeba-test.eng-ady.eng.ady | 0.4 | 0.006 |
| Tatoeba-test.eng-afh.eng.afh | 1.0 | 0.058 |
| Tatoeba-test.eng-afr.eng.afr | 47.0 | 0.663 |
| Tatoeba-test.eng-akl.eng.akl | 2.7 | 0.080 |
| Tatoeba-test.eng-amh.eng.amh | 8.5 | 0.455 |
| Tatoeba-test.eng-ang.eng.ang | 6.2 | 0.138 |
| Tatoeba-test.eng-ara.eng.ara | 6.3 | 0.325 |
| Tatoeba-test.eng-arg.eng.arg | 1.5 | 0.107 |
| Tatoeba-test.eng-asm.eng.asm | 2.1 | 0.265 |
| Tatoeba-test.eng-ast.eng.ast | 15.7 | 0.393 |
| Tatoeba-test.eng-avk.eng.avk | 0.2 | 0.095 |
| Tatoeba-test.eng-awa.eng.awa | 0.1 | 0.002 |
| Tatoeba-test.eng-aze.eng.aze | 19.0 | 0.500 |
| Tatoeba-test.eng-bak.eng.bak | 12.7 | 0.379 |
| Tatoeba-test.eng-bam.eng.bam | 8.3 | 0.037 |
| Tatoeba-test.eng-bel.eng.bel | 13.5 | 0.396 |
| Tatoeba-test.eng-ben.eng.ben | 10.0 | 0.383 |
| Tatoeba-test.eng-bho.eng.bho | 0.1 | 0.003 |
| Tatoeba-test.eng-bod.eng.bod | 0.0 | 0.147 |
| Tatoeba-test.eng-bre.eng.bre | 7.6 | 0.275 |
| Tatoeba-test.eng-brx.eng.brx | 0.8 | 0.060 |
| Tatoeba-test.eng-bul.eng.bul | 32.1 | 0.542 |
| Tatoeba-test.eng-cat.eng.cat | 37.0 | 0.595 |
| Tatoeba-test.eng-ceb.eng.ceb | 9.6 | 0.409 |
| Tatoeba-test.eng-ces.eng.ces | 24.0 | 0.475 |
| Tatoeba-test.eng-cha.eng.cha | 3.9 | 0.228 |
| Tatoeba-test.eng-che.eng.che | 0.7 | 0.013 |
| Tatoeba-test.eng-chm.eng.chm | 2.6 | 0.212 |
| Tatoeba-test.eng-chr.eng.chr | 6.0 | 0.190 |
| Tatoeba-test.eng-chv.eng.chv | 6.5 | 0.369 |
| Tatoeba-test.eng-cor.eng.cor | 0.9 | 0.086 |
| Tatoeba-test.eng-cos.eng.cos | 4.2 | 0.174 |
| Tatoeba-test.eng-crh.eng.crh | 9.9 | 0.361 |
| Tatoeba-test.eng-csb.eng.csb | 3.4 | 0.230 |
| Tatoeba-test.eng-cym.eng.cym | 18.0 | 0.418 |
| Tatoeba-test.eng-dan.eng.dan | 42.5 | 0.624 |
| Tatoeba-test.eng-deu.eng.deu | 25.2 | 0.505 |
| Tatoeba-test.eng-dsb.eng.dsb | 0.9 | 0.121 |
| Tatoeba-test.eng-dtp.eng.dtp | 0.3 | 0.084 |
| Tatoeba-test.eng-dws.eng.dws | 0.2 | 0.040 |
| Tatoeba-test.eng-egl.eng.egl | 0.4 | 0.085 |
| Tatoeba-test.eng-ell.eng.ell | 28.7 | 0.543 |
| Tatoeba-test.eng-enm.eng.enm | 3.3 | 0.295 |
| Tatoeba-test.eng-epo.eng.epo | 33.4 | 0.570 |
| Tatoeba-test.eng-est.eng.est | 30.3 | 0.545 |
| Tatoeba-test.eng-eus.eng.eus | 18.5 | 0.486 |
| Tatoeba-test.eng-ewe.eng.ewe | 6.8 | 0.272 |
| Tatoeba-test.eng-ext.eng.ext | 5.0 | 0.228 |
| Tatoeba-test.eng-fao.eng.fao | 5.2 | 0.277 |
| Tatoeba-test.eng-fas.eng.fas | 6.9 | 0.265 |
| Tatoeba-test.eng-fij.eng.fij | 31.5 | 0.365 |
| Tatoeba-test.eng-fin.eng.fin | 18.5 | 0.459 |
| Tatoeba-test.eng-fkv.eng.fkv | 0.9 | 0.132 |
| Tatoeba-test.eng-fra.eng.fra | 31.5 | 0.546 |
| Tatoeba-test.eng-frm.eng.frm | 0.9 | 0.128 |
| Tatoeba-test.eng-frr.eng.frr | 3.0 | 0.025 |
| Tatoeba-test.eng-fry.eng.fry | 14.4 | 0.387 |
| Tatoeba-test.eng-ful.eng.ful | 0.4 | 0.061 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.3 | 0.075 |
| Tatoeba-test.eng-gil.eng.gil | 47.4 | 0.706 |
| Tatoeba-test.eng-gla.eng.gla | 10.9 | 0.341 |
| Tatoeba-test.eng-gle.eng.gle | 26.8 | 0.493 |
| Tatoeba-test.eng-glg.eng.glg | 32.5 | 0.565 |
| Tatoeba-test.eng-glv.eng.glv | 21.5 | 0.395 |
| Tatoeba-test.eng-gos.eng.gos | 0.3 | 0.124 |
| Tatoeba-test.eng-got.eng.got | 0.2 | 0.010 |
| Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 |
| Tatoeba-test.eng-grn.eng.grn | 1.5 | 0.129 |
| Tatoeba-test.eng-gsw.eng.gsw | 0.6 | 0.106 |
| Tatoeba-test.eng-guj.eng.guj | 15.4 | 0.347 |
| Tatoeba-test.eng-hat.eng.hat | 31.1 | 0.527 |
| Tatoeba-test.eng-hau.eng.hau | 6.5 | 0.385 |
| Tatoeba-test.eng-haw.eng.haw | 0.2 | 0.066 |
| Tatoeba-test.eng-hbs.eng.hbs | 28.7 | 0.531 |
| Tatoeba-test.eng-heb.eng.heb | 21.3 | 0.443 |
| Tatoeba-test.eng-hif.eng.hif | 2.8 | 0.268 |
| Tatoeba-test.eng-hil.eng.hil | 12.0 | 0.463 |
| Tatoeba-test.eng-hin.eng.hin | 13.0 | 0.401 |
| Tatoeba-test.eng-hmn.eng.hmn | 0.2 | 0.073 |
| Tatoeba-test.eng-hoc.eng.hoc | 0.2 | 0.077 |
| Tatoeba-test.eng-hsb.eng.hsb | 5.7 | 0.308 |
| Tatoeba-test.eng-hun.eng.hun | 17.1 | 0.431 |
| Tatoeba-test.eng-hye.eng.hye | 15.0 | 0.378 |
| Tatoeba-test.eng-iba.eng.iba | 16.0 | 0.437 |
| Tatoeba-test.eng-ibo.eng.ibo | 2.9 | 0.221 |
| Tatoeba-test.eng-ido.eng.ido | 11.5 | 0.403 |
| Tatoeba-test.eng-iku.eng.iku | 2.3 | 0.089 |
| Tatoeba-test.eng-ile.eng.ile | 4.3 | 0.282 |
| Tatoeba-test.eng-ilo.eng.ilo | 26.4 | 0.522 |
| Tatoeba-test.eng-ina.eng.ina | 20.9 | 0.493 |
| Tatoeba-test.eng-isl.eng.isl | 12.5 | 0.375 |
| Tatoeba-test.eng-ita.eng.ita | 33.9 | 0.592 |
| Tatoeba-test.eng-izh.eng.izh | 4.6 | 0.050 |
| Tatoeba-test.eng-jav.eng.jav | 7.8 | 0.328 |
| Tatoeba-test.eng-jbo.eng.jbo | 0.1 | 0.123 |
| Tatoeba-test.eng-jdt.eng.jdt | 6.4 | 0.008 |
| Tatoeba-test.eng-jpn.eng.jpn | 0.0 | 0.000 |
| Tatoeba-test.eng-kab.eng.kab | 5.9 | 0.261 |
| Tatoeba-test.eng-kal.eng.kal | 13.4 | 0.382 |
| Tatoeba-test.eng-kan.eng.kan | 4.8 | 0.358 |
| Tatoeba-test.eng-kat.eng.kat | 1.8 | 0.115 |
| Tatoeba-test.eng-kaz.eng.kaz | 8.8 | 0.354 |
| Tatoeba-test.eng-kek.eng.kek | 3.7 | 0.188 |
| Tatoeba-test.eng-kha.eng.kha | 0.5 | 0.094 |
| Tatoeba-test.eng-khm.eng.khm | 0.4 | 0.243 |
| Tatoeba-test.eng-kin.eng.kin | 5.2 | 0.362 |
| Tatoeba-test.eng-kir.eng.kir | 17.2 | 0.416 |
| Tatoeba-test.eng-kjh.eng.kjh | 0.6 | 0.009 |
| Tatoeba-test.eng-kok.eng.kok | 5.5 | 0.005 |
| Tatoeba-test.eng-kom.eng.kom | 2.4 | 0.012 |
| Tatoeba-test.eng-krl.eng.krl | 2.0 | 0.099 |
| Tatoeba-test.eng-ksh.eng.ksh | 0.4 | 0.074 |
| Tatoeba-test.eng-kum.eng.kum | 0.9 | 0.007 |
| Tatoeba-test.eng-kur.eng.kur | 9.1 | 0.174 |
| Tatoeba-test.eng-lad.eng.lad | 1.2 | 0.154 |
| Tatoeba-test.eng-lah.eng.lah | 0.1 | 0.001 |
| Tatoeba-test.eng-lao.eng.lao | 0.6 | 0.426 |
| Tatoeba-test.eng-lat.eng.lat | 8.2 | 0.366 |
| Tatoeba-test.eng-lav.eng.lav | 20.4 | 0.475 |
| Tatoeba-test.eng-ldn.eng.ldn | 0.3 | 0.059 |
| Tatoeba-test.eng-lfn.eng.lfn | 0.5 | 0.104 |
| Tatoeba-test.eng-lij.eng.lij | 0.2 | 0.094 |
| Tatoeba-test.eng-lin.eng.lin | 1.2 | 0.276 |
| Tatoeba-test.eng-lit.eng.lit | 17.4 | 0.488 |
| Tatoeba-test.eng-liv.eng.liv | 0.3 | 0.039 |
| Tatoeba-test.eng-lkt.eng.lkt | 0.3 | 0.041 |
| Tatoeba-test.eng-lld.eng.lld | 0.1 | 0.083 |
| Tatoeba-test.eng-lmo.eng.lmo | 1.4 | 0.154 |
| Tatoeba-test.eng-ltz.eng.ltz | 19.1 | 0.395 |
| Tatoeba-test.eng-lug.eng.lug | 4.2 | 0.382 |
| Tatoeba-test.eng-mad.eng.mad | 2.1 | 0.075 |
| Tatoeba-test.eng-mah.eng.mah | 9.5 | 0.331 |
| Tatoeba-test.eng-mai.eng.mai | 9.3 | 0.372 |
| Tatoeba-test.eng-mal.eng.mal | 8.3 | 0.437 |
| Tatoeba-test.eng-mar.eng.mar | 13.5 | 0.410 |
| Tatoeba-test.eng-mdf.eng.mdf | 2.3 | 0.008 |
| Tatoeba-test.eng-mfe.eng.mfe | 83.6 | 0.905 |
| Tatoeba-test.eng-mic.eng.mic | 7.6 | 0.214 |
| Tatoeba-test.eng-mkd.eng.mkd | 31.8 | 0.540 |
| Tatoeba-test.eng-mlg.eng.mlg | 31.3 | 0.464 |
| Tatoeba-test.eng-mlt.eng.mlt | 11.7 | 0.427 |
| Tatoeba-test.eng-mnw.eng.mnw | 0.1 | 0.000 |
| Tatoeba-test.eng-moh.eng.moh | 0.6 | 0.067 |
| Tatoeba-test.eng-mon.eng.mon | 8.5 | 0.323 |
| Tatoeba-test.eng-mri.eng.mri | 8.5 | 0.320 |
| Tatoeba-test.eng-msa.eng.msa | 24.5 | 0.498 |
| Tatoeba-test.eng.multi | 22.4 | 0.451 |
| Tatoeba-test.eng-mwl.eng.mwl | 3.8 | 0.169 |
| Tatoeba-test.eng-mya.eng.mya | 0.2 | 0.123 |
| Tatoeba-test.eng-myv.eng.myv | 1.1 | 0.014 |
| Tatoeba-test.eng-nau.eng.nau | 0.6 | 0.109 |
| Tatoeba-test.eng-nav.eng.nav | 1.8 | 0.149 |
| Tatoeba-test.eng-nds.eng.nds | 11.3 | 0.365 |
| Tatoeba-test.eng-nep.eng.nep | 0.5 | 0.004 |
| Tatoeba-test.eng-niu.eng.niu | 34.4 | 0.501 |
| Tatoeba-test.eng-nld.eng.nld | 37.6 | 0.598 |
| Tatoeba-test.eng-nog.eng.nog | 0.2 | 0.010 |
| Tatoeba-test.eng-non.eng.non | 0.2 | 0.096 |
| Tatoeba-test.eng-nor.eng.nor | 36.3 | 0.577 |
| Tatoeba-test.eng-nov.eng.nov | 0.9 | 0.180 |
| Tatoeba-test.eng-nya.eng.nya | 9.8 | 0.524 |
| Tatoeba-test.eng-oci.eng.oci | 6.3 | 0.288 |
| Tatoeba-test.eng-ori.eng.ori | 5.3 | 0.273 |
| Tatoeba-test.eng-orv.eng.orv | 0.2 | 0.007 |
| Tatoeba-test.eng-oss.eng.oss | 3.0 | 0.230 |
| Tatoeba-test.eng-ota.eng.ota | 0.2 | 0.053 |
| Tatoeba-test.eng-pag.eng.pag | 20.2 | 0.513 |
| Tatoeba-test.eng-pan.eng.pan | 6.4 | 0.301 |
| Tatoeba-test.eng-pap.eng.pap | 44.7 | 0.624 |
| Tatoeba-test.eng-pau.eng.pau | 0.8 | 0.098 |
| Tatoeba-test.eng-pdc.eng.pdc | 2.9 | 0.143 |
| Tatoeba-test.eng-pms.eng.pms | 0.6 | 0.124 |
| Tatoeba-test.eng-pol.eng.pol | 22.7 | 0.500 |
| Tatoeba-test.eng-por.eng.por | 31.6 | 0.570 |
| Tatoeba-test.eng-ppl.eng.ppl | 0.5 | 0.085 |
| Tatoeba-test.eng-prg.eng.prg | 0.1 | 0.078 |
| Tatoeba-test.eng-pus.eng.pus | 0.9 | 0.137 |
| Tatoeba-test.eng-quc.eng.quc | 2.7 | 0.255 |
| Tatoeba-test.eng-qya.eng.qya | 0.4 | 0.084 |
| Tatoeba-test.eng-rap.eng.rap | 1.9 | 0.050 |
| Tatoeba-test.eng-rif.eng.rif | 1.3 | 0.102 |
| Tatoeba-test.eng-roh.eng.roh | 1.4 | 0.169 |
| Tatoeba-test.eng-rom.eng.rom | 7.8 | 0.329 |
| Tatoeba-test.eng-ron.eng.ron | 27.0 | 0.530 |
| Tatoeba-test.eng-rue.eng.rue | 0.1 | 0.009 |
| Tatoeba-test.eng-run.eng.run | 9.8 | 0.434 |
| Tatoeba-test.eng-rus.eng.rus | 22.2 | 0.465 |
| Tatoeba-test.eng-sag.eng.sag | 4.8 | 0.155 |
| Tatoeba-test.eng-sah.eng.sah | 0.2 | 0.007 |
| Tatoeba-test.eng-san.eng.san | 1.7 | 0.143 |
| Tatoeba-test.eng-scn.eng.scn | 1.5 | 0.083 |
| Tatoeba-test.eng-sco.eng.sco | 30.3 | 0.514 |
| Tatoeba-test.eng-sgs.eng.sgs | 1.6 | 0.104 |
| Tatoeba-test.eng-shs.eng.shs | 0.7 | 0.049 |
| Tatoeba-test.eng-shy.eng.shy | 0.6 | 0.064 |
| Tatoeba-test.eng-sin.eng.sin | 5.4 | 0.317 |
| Tatoeba-test.eng-sjn.eng.sjn | 0.3 | 0.074 |
| Tatoeba-test.eng-slv.eng.slv | 12.8 | 0.313 |
| Tatoeba-test.eng-sma.eng.sma | 0.8 | 0.063 |
| Tatoeba-test.eng-sme.eng.sme | 13.2 | 0.290 |
| Tatoeba-test.eng-smo.eng.smo | 12.1 | 0.416 |
| Tatoeba-test.eng-sna.eng.sna | 27.1 | 0.533 |
| Tatoeba-test.eng-snd.eng.snd | 6.0 | 0.359 |
| Tatoeba-test.eng-som.eng.som | 16.0 | 0.274 |
| Tatoeba-test.eng-spa.eng.spa | 36.7 | 0.603 |
| Tatoeba-test.eng-sqi.eng.sqi | 32.3 | 0.573 |
| Tatoeba-test.eng-stq.eng.stq | 0.6 | 0.198 |
| Tatoeba-test.eng-sun.eng.sun | 39.0 | 0.447 |
| Tatoeba-test.eng-swa.eng.swa | 1.1 | 0.109 |
| Tatoeba-test.eng-swe.eng.swe | 42.7 | 0.614 |
| Tatoeba-test.eng-swg.eng.swg | 0.6 | 0.118 |
| Tatoeba-test.eng-tah.eng.tah | 12.4 | 0.294 |
| Tatoeba-test.eng-tam.eng.tam | 5.0 | 0.404 |
| Tatoeba-test.eng-tat.eng.tat | 9.9 | 0.326 |
| Tatoeba-test.eng-tel.eng.tel | 4.7 | 0.326 |
| Tatoeba-test.eng-tet.eng.tet | 0.7 | 0.100 |
| Tatoeba-test.eng-tgk.eng.tgk | 5.5 | 0.304 |
| Tatoeba-test.eng-tha.eng.tha | 2.2 | 0.456 |
| Tatoeba-test.eng-tir.eng.tir | 1.5 | 0.197 |
| Tatoeba-test.eng-tlh.eng.tlh | 0.0 | 0.032 |
| Tatoeba-test.eng-tly.eng.tly | 0.3 | 0.061 |
| Tatoeba-test.eng-toi.eng.toi | 8.3 | 0.219 |
| Tatoeba-test.eng-ton.eng.ton | 32.7 | 0.619 |
| Tatoeba-test.eng-tpw.eng.tpw | 1.4 | 0.136 |
| Tatoeba-test.eng-tso.eng.tso | 9.6 | 0.465 |
| Tatoeba-test.eng-tuk.eng.tuk | 9.4 | 0.383 |
| Tatoeba-test.eng-tur.eng.tur | 24.1 | 0.542 |
| Tatoeba-test.eng-tvl.eng.tvl | 8.9 | 0.398 |
| Tatoeba-test.eng-tyv.eng.tyv | 10.4 | 0.249 |
| Tatoeba-test.eng-tzl.eng.tzl | 0.2 | 0.098 |
| Tatoeba-test.eng-udm.eng.udm | 6.5 | 0.212 |
| Tatoeba-test.eng-uig.eng.uig | 2.1 | 0.266 |
| Tatoeba-test.eng-ukr.eng.ukr | 24.3 | 0.479 |
| Tatoeba-test.eng-umb.eng.umb | 4.4 | 0.274 |
| Tatoeba-test.eng-urd.eng.urd | 8.6 | 0.344 |
| Tatoeba-test.eng-uzb.eng.uzb | 6.9 | 0.343 |
| Tatoeba-test.eng-vec.eng.vec | 1.0 | 0.094 |
| Tatoeba-test.eng-vie.eng.vie | 23.2 | 0.420 |
| Tatoeba-test.eng-vol.eng.vol | 0.3 | 0.086 |
| Tatoeba-test.eng-war.eng.war | 11.4 | 0.415 |
| Tatoeba-test.eng-wln.eng.wln | 8.4 | 0.218 |
| Tatoeba-test.eng-wol.eng.wol | 11.5 | 0.252 |
| Tatoeba-test.eng-xal.eng.xal | 0.1 | 0.007 |
| Tatoeba-test.eng-xho.eng.xho | 19.5 | 0.552 |
| Tatoeba-test.eng-yid.eng.yid | 4.0 | 0.256 |
| Tatoeba-test.eng-yor.eng.yor | 8.8 | 0.247 |
| Tatoeba-test.eng-zho.eng.zho | 21.8 | 0.192 |
| Tatoeba-test.eng-zul.eng.zul | 34.3 | 0.655 |
| Tatoeba-test.eng-zza.eng.zza | 0.5 | 0.080 |
### System Info:
- hf_name: eng-mul
- source_languages: eng
- target_languages: mul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-mul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul']
- src_constituents: {'eng'}
- tgt_constituents: {'sjn_Latn', 'cat', 'nan', 'spa', 'ile_Latn', 'pap', 'mwl', 'uzb_Latn', 'mww', 'hil', 'lij', 'avk_Latn', 'lad_Latn', 'lat_Latn', 'bos_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi_Latn', 'awa', 'swg', 'zsm_Latn', 'zho_Hant', 'gcf_Latn', 'uzb_Cyrl', 'isl', 'lfn_Latn', 'shs_Latn', 'nov_Latn', 'bho', 'ltz', 'lzh', 'kur_Latn', 'sun', 'arg', 'pes_Thaa', 'sqi', 'uig_Arab', 'csb_Latn', 'fra', 'hat', 'liv_Latn', 'non_Latn', 'sco', 'cmn_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul_Latn', 'amh', 'lfn_Cyrl', 'eus', 'fkv_Latn', 'tur', 'pus', 'afr', 'brx_Latn', 'nya', 'acm', 'ota_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho_Hans', 'tmw_Latn', 'kjh', 'ota_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh_Hans', 'ara', 'tly_Latn', 'lug', 'brx', 'bul', 'bel', 'vol_Latn', 'kat', 'gan', 'got_Goth', 'vro', 'ext', 'afh_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif_Latn', 'cjy_Hant', 'bre', 'ceb', 'mah', 'nob_Hebr', 'crh_Latn', 'prg_Latn', 'khm', 'ang_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie_Hani', 'arz', 'yue', 'kha', 'san_Deva', 'jbo_Latn', 'gos', 'hau_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig_Cyrl', 'fao', 'mnw', 'zho', 'orv_Cyrl', 'iba', 'bel_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc_Grek', 'tpw_Latn', 'oci', 'mfe', 'sna', 'kir_Cyrl', 'tat_Latn', 'gom', 'ido_Latn', 'sgs', 'pau', 'tgk_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp_Latn', 'wuu', 'dtp', 'jbo_Cyrl', 'tet', 'bod', 'yue_Hans', 'zlm_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif_Latn', 'vie', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina_Latn', 'cjy_Hans', 'jdt_Cyrl', 'gsw', 'glv', 'khm_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd_Arab', 'arq', 'mri', 'kur_Arab', 'por', 'hin', 'shy_Latn', 'sme', 'rap', 
'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue_Hant', 'kpv', 'tam', 'est', 'frm_Latn', 'hoc_Latn', 'bam_Latn', 'kek_Latn', 'ksh', 'tlh_Latn', 'ltg', 'pan_Guru', 'hnj_Latn', 'cor', 'gle', 'swe', 'lin', 'qya_Latn', 'kum', 'mad', 'cmn_Hant', 'fuv', 'nau', 'mon', 'akl_Latn', 'guj', 'kaz_Latn', 'wln', 'tuk_Latn', 'jav_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws_Latn', 'urd', 'stq', 'tat_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld_Latn', 'tzl_Latn', 'mdf', 'ike_Latn', 'ces', 'ldn_Latn', 'egl', 'heb', 'vec', 'zul', 'max_Latn', 'pes_Latn', 'yid', 'mal', 'nds'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-mul/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: mul
- short_pair: en-mul
- chrF2_score: 0.451
- bleu: 22.4
- brevity_penalty: 0.987
- ref_len: 68724.0
- src_name: English
- tgt_name: Multiple languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: mul
- prefer_old: False
- long_pair: eng-mul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "ca", "es", "os", "eo", "ro", "fy", "cy", "is", "lb", "su", "an", "sq", "fr", "ht", "rm", "cv", "ig", "am", "eu", "tr", "ps", "af", "ny", "ch", "uk", "sl", "lt", "tk", "sg", "ar", "lg", "bg", "be", "ka", "gd", "ja", "si", "br", "mh", "km", "th", "ty", "rw", "te", "mk", "or", "wo", "kl", "mr", "ru", "yo", "hu", "fo", "zh", "ti", "co", "ee", "oc", "sn", "mt", "ts", "pl", "gl", "nb", "bn", "tt", "bo", "lo", "id", "gn", "nv", "hy", "kn", "to", "io", "so", "vi", "da", "fj", "gv", "sm", "nl", "mi", "pt", "hi", "se", "as", "ta", "et", "kw", "ga", "sv", "ln", "na", "mn", "gu", "wa", "lv", "jv", "el", "my", "ba", "it", "hr", "ur", "ce", "nn", "fi", "mg", "rn", "xh", "ab", "de", "cs", "he", "zu", "yi", "ml", "mul"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-mul | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ca",
"es",
"os",
"eo",
"ro",
"fy",
"cy",
"is",
"lb",
"su",
"an",
"sq",
"fr",
"ht",
"rm",
"cv",
"ig",
"am",
"eu",
"tr",
"ps",
"af",
"ny",
"ch",
"uk",
"sl",
"lt",
"tk",
"sg",
"ar",
"lg",
"bg",
"be",
"ka",
"gd",
"ja",
"si",
"br",
"mh",
"km",
"th",
"ty",
"rw",
"te",
"mk",
"or",
"wo",
"kl",
"mr",
"ru",
"yo",
"hu",
"fo",
"zh",
"ti",
"co",
"ee",
"oc",
"sn",
"mt",
"ts",
"pl",
"gl",
"nb",
"bn",
"tt",
"bo",
"lo",
"id",
"gn",
"nv",
"hy",
"kn",
"to",
"io",
"so",
"vi",
"da",
"fj",
"gv",
"sm",
"nl",
"mi",
"pt",
"hi",
"se",
"as",
"ta",
"et",
"kw",
"ga",
"sv",
"ln",
"na",
"mn",
"gu",
"wa",
"lv",
"jv",
"el",
"my",
"ba",
"it",
"hr",
"ur",
"ce",
"nn",
"fi",
"mg",
"rn",
"xh",
"ab",
"de",
"cs",
"he",
"zu",
"yi",
"ml",
"mul",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ca",
"es",
"os",
"eo",
"ro",
"fy",
"cy",
"is",
"lb",
"su",
"an",
"sq",
"fr",
"ht",
"rm",
"cv",
"ig",
"am",
"eu",
"tr",
"ps",
"af",
"ny",
"ch",
"uk",
"sl",
"lt",
"tk",
"sg",
"ar",
"lg",
"bg",
"be",
"ka",
"gd",
"ja",
"si",
"br",
"mh",
"km",
"th",
"ty",
"rw",
"te",
"mk",
"or",
"wo",
"kl",
"mr",
"ru",
"yo",
"hu",
"fo",
"zh",
"ti",
"co",
"ee",
"oc",
"sn",
"mt",
"ts",
"pl",
"gl",
"nb",
"bn",
"tt",
"bo",
"lo",
"id",
"gn",
"nv",
"hy",
"kn",
"to",
"io",
"so",
"vi",
"da",
"fj",
"gv",
"sm",
"nl",
"mi",
"pt",
"hi",
"se",
"as",
"ta",
"et",
"kw",
"ga",
"sv",
"ln",
"na",
"mn",
"gu",
"wa",
"lv",
"jv",
"el",
"my",
"ba",
"it",
"hr",
"ur",
"ce",
"nn",
"fi",
"mg",
"rn",
"xh",
"ab",
"de",
"cs",
"he",
"zu",
"yi",
"ml",
"mul"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ca #es #os #eo #ro #fy #cy #is #lb #su #an #sq #fr #ht #rm #cv #ig #am #eu #tr #ps #af #ny #ch #uk #sl #lt #tk #sg #ar #lg #bg #be #ka #gd #ja #si #br #mh #km #th #ty #rw #te #mk #or #wo #kl #mr #ru #yo #hu #fo #zh #ti #co #ee #oc #sn #mt #ts #pl #gl #nb #bn #tt #bo #lo #id #gn #nv #hy #kn #to #io #so #vi #da #fj #gv #sm #nl #mi #pt #hi #se #as #ta #et #kw #ga #sv #ln #na #mn #gu #wa #lv #jv #el #my #ba #it #hr #ur #ce #nn #fi #mg #rn #xh #ab #de #cs #he #zu #yi #ml #mul #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-mul
* source group: English
* target group: Multiple languages
* OPUS readme: eng-mul
* model: transformer
* source language(s): eng
* target language(s): abk acm ady afb afh\_Latn afr akl\_Latn aln amh ang\_Latn apc ara arg arq ary arz asm ast avk\_Latn awa aze\_Latn bak bam\_Latn bel bel\_Latn ben bho bod bos\_Latn bre brx brx\_Latn bul bul\_Latn cat ceb ces cha che chr chv cjy\_Hans cjy\_Hant cmn cmn\_Hans cmn\_Hant cor cos crh crh\_Latn csb\_Latn cym dan deu dsb dtp dws\_Latn egl ell enm\_Latn epo est eus ewe ext fao fij fin fkv\_Latn fra frm\_Latn frr fry fuc fuv gan gcf\_Latn gil gla gle glg glv gom gos got\_Goth grc\_Grek grn gsw guj hat hau\_Latn haw heb hif\_Latn hil hin hnj\_Latn hoc hoc\_Latn hrv hsb hun hye iba ibo ido ido\_Latn ike\_Latn ile\_Latn ilo ina\_Latn ind isl ita izh jav jav\_Java jbo jbo\_Cyrl jbo\_Latn jdt\_Cyrl jpn kab kal kan kat kaz\_Cyrl kaz\_Latn kek\_Latn kha khm khm\_Latn kin kir\_Cyrl kjh kpv krl ksh kum kur\_Arab kur\_Latn lad lad\_Latn lao lat\_Latn lav ldn\_Latn lfn\_Cyrl lfn\_Latn lij lin lit liv\_Latn lkt lld\_Latn lmo ltg ltz lug lzh lzh\_Hans mad mah mai mal mar max\_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob\_Hebr nog non\_Latn nov\_Latn npi nya oci ori orv\_Cyrl oss ota\_Arab ota\_Latn pag pan\_Guru pap pau pdc pes pes\_Latn pes\_Thaa pms pnb pol por ppl\_Latn prg\_Latn pus quc qya qya\_Latn rap rif\_Latn roh rom ron rue run rus sag sah san\_Deva scn sco sgs shs\_Latn shy\_Latn sin sjn\_Latn slv sma sme smo sna snd\_Arab som spa sqi srp\_Cyrl srp\_Latn stq sun swe swg swh tah tam tat tat\_Arab tat\_Latn tel tet tgk\_Cyrl tha tir tlh\_Latn tly\_Latn tmw\_Latn toi\_Latn ton tpw\_Latn tso tuk tuk\_Latn tur tvl tyv tzl tzl\_Latn udm uig\_Arab uig\_Cyrl ukr umb urd uzb\_Cyrl uzb\_Latn vec vie vie\_Hani vol\_Latn vro war wln wol wuu xal xho yid yor yue yue\_Hans yue\_Hant zho zho\_Hans zho\_Hant zlm\_Latn zsm\_Latn zul zza
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 5.0, chr-F: 0.288
testset: URL, BLEU: 9.3, chr-F: 0.418
testset: URL, BLEU: 17.2, chr-F: 0.488
testset: URL, BLEU: 8.2, chr-F: 0.402
testset: URL, BLEU: 12.9, chr-F: 0.444
testset: URL, BLEU: 17.6, chr-F: 0.170
testset: URL, BLEU: 10.9, chr-F: 0.423
testset: URL, BLEU: 5.2, chr-F: 0.284
testset: URL, BLEU: 11.0, chr-F: 0.431
testset: URL, BLEU: 22.6, chr-F: 0.521
testset: URL, BLEU: 25.9, chr-F: 0.546
testset: URL, BLEU: 10.3, chr-F: 0.394
testset: URL, BLEU: 13.3, chr-F: 0.459
testset: URL, BLEU: 21.5, chr-F: 0.522
testset: URL, BLEU: 8.1, chr-F: 0.371
testset: URL, BLEU: 22.1, chr-F: 0.540
testset: URL, BLEU: 23.8, chr-F: 0.531
testset: URL, BLEU: 9.0, chr-F: 0.376
testset: URL, BLEU: 14.2, chr-F: 0.451
testset: URL, BLEU: 19.8, chr-F: 0.500
testset: URL, BLEU: 22.8, chr-F: 0.518
testset: URL, BLEU: 9.8, chr-F: 0.392
testset: URL, BLEU: 13.7, chr-F: 0.454
testset: URL, BLEU: 20.7, chr-F: 0.514
testset: URL, BLEU: 8.4, chr-F: 0.370
testset: URL, BLEU: 22.4, chr-F: 0.538
testset: URL, BLEU: 23.5, chr-F: 0.532
testset: URL, BLEU: 10.0, chr-F: 0.393
testset: URL, BLEU: 15.2, chr-F: 0.463
testset: URL, BLEU: 22.0, chr-F: 0.524
testset: URL, BLEU: 27.2, chr-F: 0.556
testset: URL, BLEU: 10.8, chr-F: 0.392
testset: URL, BLEU: 14.2, chr-F: 0.449
testset: URL, BLEU: 24.3, chr-F: 0.544
testset: URL, BLEU: 28.3, chr-F: 0.559
testset: URL, BLEU: 9.9, chr-F: 0.377
testset: URL, BLEU: 14.3, chr-F: 0.449
testset: URL, BLEU: 23.2, chr-F: 0.530
testset: URL, BLEU: 16.0, chr-F: 0.463
testset: URL, BLEU: 27.8, chr-F: 0.555
testset: URL, BLEU: 11.0, chr-F: 0.392
testset: URL, BLEU: 16.4, chr-F: 0.469
testset: URL, BLEU: 22.6, chr-F: 0.515
testset: URL, BLEU: 12.1, chr-F: 0.414
testset: URL, BLEU: 24.9, chr-F: 0.532
testset: URL, BLEU: 7.2, chr-F: 0.311
testset: URL, BLEU: 10.9, chr-F: 0.396
testset: URL, BLEU: 18.3, chr-F: 0.490
testset: URL, BLEU: 10.1, chr-F: 0.421
testset: URL, BLEU: 14.5, chr-F: 0.445
testset: URL, BLEU: 12.2, chr-F: 0.408
testset: URL, BLEU: 21.4, chr-F: 0.517
testset: URL, BLEU: 11.2, chr-F: 0.435
testset: URL, BLEU: 16.6, chr-F: 0.472
testset: URL, BLEU: 13.4, chr-F: 0.435
testset: URL, BLEU: 8.1, chr-F: 0.385
testset: URL, BLEU: 9.6, chr-F: 0.377
testset: URL, BLEU: 17.9, chr-F: 0.482
testset: URL, BLEU: 11.8, chr-F: 0.440
testset: URL, BLEU: 9.6, chr-F: 0.412
testset: URL, BLEU: 14.1, chr-F: 0.446
testset: URL, BLEU: 8.0, chr-F: 0.378
testset: URL, BLEU: 16.8, chr-F: 0.175
testset: URL, BLEU: 9.8, chr-F: 0.380
testset: URL, BLEU: 23.8, chr-F: 0.536
testset: URL, BLEU: 11.8, chr-F: 0.433
testset: URL, BLEU: 7.8, chr-F: 0.398
testset: URL, BLEU: 12.2, chr-F: 0.434
testset: URL, BLEU: 7.5, chr-F: 0.383
testset: URL, BLEU: 18.3, chr-F: 0.179
testset: URL, BLEU: 10.7, chr-F: 0.389
testset: URL, BLEU: 21.0, chr-F: 0.512
testset: URL, BLEU: 10.4, chr-F: 0.420
testset: URL, BLEU: 5.8, chr-F: 0.297
testset: URL, BLEU: 8.0, chr-F: 0.388
testset: URL, BLEU: 13.0, chr-F: 0.415
testset: URL, BLEU: 15.0, chr-F: 0.192
testset: URL, BLEU: 9.0, chr-F: 0.414
testset: URL, BLEU: 9.5, chr-F: 0.415
testset: URL, BLEU: 4.2, chr-F: 0.275
testset: URL, BLEU: 0.4, chr-F: 0.006
testset: URL, BLEU: 1.0, chr-F: 0.058
testset: URL, BLEU: 47.0, chr-F: 0.663
testset: URL, BLEU: 2.7, chr-F: 0.080
testset: URL, BLEU: 8.5, chr-F: 0.455
testset: URL, BLEU: 6.2, chr-F: 0.138
testset: URL, BLEU: 6.3, chr-F: 0.325
testset: URL, BLEU: 1.5, chr-F: 0.107
testset: URL, BLEU: 2.1, chr-F: 0.265
testset: URL, BLEU: 15.7, chr-F: 0.393
testset: URL, BLEU: 0.2, chr-F: 0.095
testset: URL, BLEU: 0.1, chr-F: 0.002
testset: URL, BLEU: 19.0, chr-F: 0.500
testset: URL, BLEU: 12.7, chr-F: 0.379
testset: URL, BLEU: 8.3, chr-F: 0.037
testset: URL, BLEU: 13.5, chr-F: 0.396
testset: URL, BLEU: 10.0, chr-F: 0.383
testset: URL, BLEU: 0.1, chr-F: 0.003
testset: URL, BLEU: 0.0, chr-F: 0.147
testset: URL, BLEU: 7.6, chr-F: 0.275
testset: URL, BLEU: 0.8, chr-F: 0.060
testset: URL, BLEU: 32.1, chr-F: 0.542
testset: URL, BLEU: 37.0, chr-F: 0.595
testset: URL, BLEU: 9.6, chr-F: 0.409
testset: URL, BLEU: 24.0, chr-F: 0.475
testset: URL, BLEU: 3.9, chr-F: 0.228
testset: URL, BLEU: 0.7, chr-F: 0.013
testset: URL, BLEU: 2.6, chr-F: 0.212
testset: URL, BLEU: 6.0, chr-F: 0.190
testset: URL, BLEU: 6.5, chr-F: 0.369
testset: URL, BLEU: 0.9, chr-F: 0.086
testset: URL, BLEU: 4.2, chr-F: 0.174
testset: URL, BLEU: 9.9, chr-F: 0.361
testset: URL, BLEU: 3.4, chr-F: 0.230
testset: URL, BLEU: 18.0, chr-F: 0.418
testset: URL, BLEU: 42.5, chr-F: 0.624
testset: URL, BLEU: 25.2, chr-F: 0.505
testset: URL, BLEU: 0.9, chr-F: 0.121
testset: URL, BLEU: 0.3, chr-F: 0.084
testset: URL, BLEU: 0.2, chr-F: 0.040
testset: URL, BLEU: 0.4, chr-F: 0.085
testset: URL, BLEU: 28.7, chr-F: 0.543
testset: URL, BLEU: 3.3, chr-F: 0.295
testset: URL, BLEU: 33.4, chr-F: 0.570
testset: URL, BLEU: 30.3, chr-F: 0.545
testset: URL, BLEU: 18.5, chr-F: 0.486
testset: URL, BLEU: 6.8, chr-F: 0.272
testset: URL, BLEU: 5.0, chr-F: 0.228
testset: URL, BLEU: 5.2, chr-F: 0.277
testset: URL, BLEU: 6.9, chr-F: 0.265
testset: URL, BLEU: 31.5, chr-F: 0.365
testset: URL, BLEU: 18.5, chr-F: 0.459
testset: URL, BLEU: 0.9, chr-F: 0.132
testset: URL, BLEU: 31.5, chr-F: 0.546
testset: URL, BLEU: 0.9, chr-F: 0.128
testset: URL, BLEU: 3.0, chr-F: 0.025
testset: URL, BLEU: 14.4, chr-F: 0.387
testset: URL, BLEU: 0.4, chr-F: 0.061
testset: URL, BLEU: 0.3, chr-F: 0.075
testset: URL, BLEU: 47.4, chr-F: 0.706
testset: URL, BLEU: 10.9, chr-F: 0.341
testset: URL, BLEU: 26.8, chr-F: 0.493
testset: URL, BLEU: 32.5, chr-F: 0.565
testset: URL, BLEU: 21.5, chr-F: 0.395
testset: URL, BLEU: 0.3, chr-F: 0.124
testset: URL, BLEU: 0.2, chr-F: 0.010
testset: URL, BLEU: 0.0, chr-F: 0.005
testset: URL, BLEU: 1.5, chr-F: 0.129
testset: URL, BLEU: 0.6, chr-F: 0.106
testset: URL, BLEU: 15.4, chr-F: 0.347
testset: URL, BLEU: 31.1, chr-F: 0.527
testset: URL, BLEU: 6.5, chr-F: 0.385
testset: URL, BLEU: 0.2, chr-F: 0.066
testset: URL, BLEU: 28.7, chr-F: 0.531
testset: URL, BLEU: 21.3, chr-F: 0.443
testset: URL, BLEU: 2.8, chr-F: 0.268
testset: URL, BLEU: 12.0, chr-F: 0.463
testset: URL, BLEU: 13.0, chr-F: 0.401
testset: URL, BLEU: 0.2, chr-F: 0.073
testset: URL, BLEU: 0.2, chr-F: 0.077
testset: URL, BLEU: 5.7, chr-F: 0.308
testset: URL, BLEU: 17.1, chr-F: 0.431
testset: URL, BLEU: 15.0, chr-F: 0.378
testset: URL, BLEU: 16.0, chr-F: 0.437
testset: URL, BLEU: 2.9, chr-F: 0.221
testset: URL, BLEU: 11.5, chr-F: 0.403
testset: URL, BLEU: 2.3, chr-F: 0.089
testset: URL, BLEU: 4.3, chr-F: 0.282
testset: URL, BLEU: 26.4, chr-F: 0.522
testset: URL, BLEU: 20.9, chr-F: 0.493
testset: URL, BLEU: 12.5, chr-F: 0.375
testset: URL, BLEU: 33.9, chr-F: 0.592
testset: URL, BLEU: 4.6, chr-F: 0.050
testset: URL, BLEU: 7.8, chr-F: 0.328
testset: URL, BLEU: 0.1, chr-F: 0.123
testset: URL, BLEU: 6.4, chr-F: 0.008
testset: URL, BLEU: 0.0, chr-F: 0.000
testset: URL, BLEU: 5.9, chr-F: 0.261
testset: URL, BLEU: 13.4, chr-F: 0.382
testset: URL, BLEU: 4.8, chr-F: 0.358
testset: URL, BLEU: 1.8, chr-F: 0.115
testset: URL, BLEU: 8.8, chr-F: 0.354
testset: URL, BLEU: 3.7, chr-F: 0.188
testset: URL, BLEU: 0.5, chr-F: 0.094
testset: URL, BLEU: 0.4, chr-F: 0.243
testset: URL, BLEU: 5.2, chr-F: 0.362
testset: URL, BLEU: 17.2, chr-F: 0.416
testset: URL, BLEU: 0.6, chr-F: 0.009
testset: URL, BLEU: 5.5, chr-F: 0.005
testset: URL, BLEU: 2.4, chr-F: 0.012
testset: URL, BLEU: 2.0, chr-F: 0.099
testset: URL, BLEU: 0.4, chr-F: 0.074
testset: URL, BLEU: 0.9, chr-F: 0.007
testset: URL, BLEU: 9.1, chr-F: 0.174
testset: URL, BLEU: 1.2, chr-F: 0.154
testset: URL, BLEU: 0.1, chr-F: 0.001
testset: URL, BLEU: 0.6, chr-F: 0.426
testset: URL, BLEU: 8.2, chr-F: 0.366
testset: URL, BLEU: 20.4, chr-F: 0.475
testset: URL, BLEU: 0.3, chr-F: 0.059
testset: URL, BLEU: 0.5, chr-F: 0.104
testset: URL, BLEU: 0.2, chr-F: 0.094
testset: URL, BLEU: 1.2, chr-F: 0.276
testset: URL, BLEU: 17.4, chr-F: 0.488
testset: URL, BLEU: 0.3, chr-F: 0.039
testset: URL, BLEU: 0.3, chr-F: 0.041
testset: URL, BLEU: 0.1, chr-F: 0.083
testset: URL, BLEU: 1.4, chr-F: 0.154
testset: URL, BLEU: 19.1, chr-F: 0.395
testset: URL, BLEU: 4.2, chr-F: 0.382
testset: URL, BLEU: 2.1, chr-F: 0.075
testset: URL, BLEU: 9.5, chr-F: 0.331
testset: URL, BLEU: 9.3, chr-F: 0.372
testset: URL, BLEU: 8.3, chr-F: 0.437
testset: URL, BLEU: 13.5, chr-F: 0.410
testset: URL, BLEU: 2.3, chr-F: 0.008
testset: URL, BLEU: 83.6, chr-F: 0.905
testset: URL, BLEU: 7.6, chr-F: 0.214
testset: URL, BLEU: 31.8, chr-F: 0.540
testset: URL, BLEU: 31.3, chr-F: 0.464
testset: URL, BLEU: 11.7, chr-F: 0.427
testset: URL, BLEU: 0.1, chr-F: 0.000
testset: URL, BLEU: 0.6, chr-F: 0.067
testset: URL, BLEU: 8.5, chr-F: 0.323
testset: URL, BLEU: 8.5, chr-F: 0.320
testset: URL, BLEU: 24.5, chr-F: 0.498
testset: URL, BLEU: 22.4, chr-F: 0.451
testset: URL, BLEU: 3.8, chr-F: 0.169
testset: URL, BLEU: 0.2, chr-F: 0.123
testset: URL, BLEU: 1.1, chr-F: 0.014
testset: URL, BLEU: 0.6, chr-F: 0.109
testset: URL, BLEU: 1.8, chr-F: 0.149
testset: URL, BLEU: 11.3, chr-F: 0.365
testset: URL, BLEU: 0.5, chr-F: 0.004
testset: URL, BLEU: 34.4, chr-F: 0.501
testset: URL, BLEU: 37.6, chr-F: 0.598
testset: URL, BLEU: 0.2, chr-F: 0.010
testset: URL, BLEU: 0.2, chr-F: 0.096
testset: URL, BLEU: 36.3, chr-F: 0.577
testset: URL, BLEU: 0.9, chr-F: 0.180
testset: URL, BLEU: 9.8, chr-F: 0.524
testset: URL, BLEU: 6.3, chr-F: 0.288
testset: URL, BLEU: 5.3, chr-F: 0.273
testset: URL, BLEU: 0.2, chr-F: 0.007
testset: URL, BLEU: 3.0, chr-F: 0.230
testset: URL, BLEU: 0.2, chr-F: 0.053
testset: URL, BLEU: 20.2, chr-F: 0.513
testset: URL, BLEU: 6.4, chr-F: 0.301
testset: URL, BLEU: 44.7, chr-F: 0.624
testset: URL, BLEU: 0.8, chr-F: 0.098
testset: URL, BLEU: 2.9, chr-F: 0.143
testset: URL, BLEU: 0.6, chr-F: 0.124
testset: URL, BLEU: 22.7, chr-F: 0.500
testset: URL, BLEU: 31.6, chr-F: 0.570
testset: URL, BLEU: 0.5, chr-F: 0.085
testset: URL, BLEU: 0.1, chr-F: 0.078
testset: URL, BLEU: 0.9, chr-F: 0.137
testset: URL, BLEU: 2.7, chr-F: 0.255
testset: URL, BLEU: 0.4, chr-F: 0.084
testset: URL, BLEU: 1.9, chr-F: 0.050
testset: URL, BLEU: 1.3, chr-F: 0.102
testset: URL, BLEU: 1.4, chr-F: 0.169
testset: URL, BLEU: 7.8, chr-F: 0.329
testset: URL, BLEU: 27.0, chr-F: 0.530
testset: URL, BLEU: 0.1, chr-F: 0.009
testset: URL, BLEU: 9.8, chr-F: 0.434
testset: URL, BLEU: 22.2, chr-F: 0.465
testset: URL, BLEU: 4.8, chr-F: 0.155
testset: URL, BLEU: 0.2, chr-F: 0.007
testset: URL, BLEU: 1.7, chr-F: 0.143
testset: URL, BLEU: 1.5, chr-F: 0.083
testset: URL, BLEU: 30.3, chr-F: 0.514
testset: URL, BLEU: 1.6, chr-F: 0.104
testset: URL, BLEU: 0.7, chr-F: 0.049
testset: URL, BLEU: 0.6, chr-F: 0.064
testset: URL, BLEU: 5.4, chr-F: 0.317
testset: URL, BLEU: 0.3, chr-F: 0.074
testset: URL, BLEU: 12.8, chr-F: 0.313
testset: URL, BLEU: 0.8, chr-F: 0.063
testset: URL, BLEU: 13.2, chr-F: 0.290
testset: URL, BLEU: 12.1, chr-F: 0.416
testset: URL, BLEU: 27.1, chr-F: 0.533
testset: URL, BLEU: 6.0, chr-F: 0.359
testset: URL, BLEU: 16.0, chr-F: 0.274
testset: URL, BLEU: 36.7, chr-F: 0.603
testset: URL, BLEU: 32.3, chr-F: 0.573
testset: URL, BLEU: 0.6, chr-F: 0.198
testset: URL, BLEU: 39.0, chr-F: 0.447
testset: URL, BLEU: 1.1, chr-F: 0.109
testset: URL, BLEU: 42.7, chr-F: 0.614
testset: URL, BLEU: 0.6, chr-F: 0.118
testset: URL, BLEU: 12.4, chr-F: 0.294
testset: URL, BLEU: 5.0, chr-F: 0.404
testset: URL, BLEU: 9.9, chr-F: 0.326
testset: URL, BLEU: 4.7, chr-F: 0.326
testset: URL, BLEU: 0.7, chr-F: 0.100
testset: URL, BLEU: 5.5, chr-F: 0.304
testset: URL, BLEU: 2.2, chr-F: 0.456
testset: URL, BLEU: 1.5, chr-F: 0.197
testset: URL, BLEU: 0.0, chr-F: 0.032
testset: URL, BLEU: 0.3, chr-F: 0.061
testset: URL, BLEU: 8.3, chr-F: 0.219
testset: URL, BLEU: 32.7, chr-F: 0.619
testset: URL, BLEU: 1.4, chr-F: 0.136
testset: URL, BLEU: 9.6, chr-F: 0.465
testset: URL, BLEU: 9.4, chr-F: 0.383
testset: URL, BLEU: 24.1, chr-F: 0.542
testset: URL, BLEU: 8.9, chr-F: 0.398
testset: URL, BLEU: 10.4, chr-F: 0.249
testset: URL, BLEU: 0.2, chr-F: 0.098
testset: URL, BLEU: 6.5, chr-F: 0.212
testset: URL, BLEU: 2.1, chr-F: 0.266
testset: URL, BLEU: 24.3, chr-F: 0.479
testset: URL, BLEU: 4.4, chr-F: 0.274
testset: URL, BLEU: 8.6, chr-F: 0.344
testset: URL, BLEU: 6.9, chr-F: 0.343
testset: URL, BLEU: 1.0, chr-F: 0.094
testset: URL, BLEU: 23.2, chr-F: 0.420
testset: URL, BLEU: 0.3, chr-F: 0.086
testset: URL, BLEU: 11.4, chr-F: 0.415
testset: URL, BLEU: 8.4, chr-F: 0.218
testset: URL, BLEU: 11.5, chr-F: 0.252
testset: URL, BLEU: 0.1, chr-F: 0.007
testset: URL, BLEU: 19.5, chr-F: 0.552
testset: URL, BLEU: 4.0, chr-F: 0.256
testset: URL, BLEU: 8.8, chr-F: 0.247
testset: URL, BLEU: 21.8, chr-F: 0.192
testset: URL, BLEU: 34.3, chr-F: 0.655
testset: URL, BLEU: 0.5, chr-F: 0.080
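The corpus BLEU reported for this model (22.4, see System Info below) already includes a length correction: the metadata lists a brevity penalty of 0.987 for a reference length of 68724 tokens. Under the standard BLEU definition, BP = min(1, exp(1 - ref_len/hyp_len)). A small sketch of that relation, inverting the reported penalty to recover the implied hypothesis length (the exact tokenization used by the evaluation is an assumption; scores here were presumably produced with a sacrebleu-style pipeline):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    # Standard BLEU brevity penalty: 1 if the hypothesis is at least as
    # long as the reference, exp(1 - ref_len/hyp_len) otherwise.
    return 1.0 if hyp_len >= ref_len else math.exp(1.0 - ref_len / hyp_len)

ref_len = 68724          # ref_len reported in System Info
bp = 0.987               # brevity_penalty reported in System Info
# Invert BP = exp(1 - ref_len/hyp_len) to get the implied hypothesis length:
hyp_len = ref_len / (1.0 - math.log(bp))
print(round(hyp_len))                                # implied hypothesis token count
print(round(brevity_penalty(hyp_len, ref_len), 3))   # recovers 0.987
```

The penalty being just below 1.0 indicates the system's outputs were on average slightly shorter than the references on the aggregated test set.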
### System Info:
* hf\_name: eng-mul
* source\_languages: eng
* target\_languages: mul
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul']
* src\_constituents: {'eng'}
* tgt\_constituents: {'sjn\_Latn', 'cat', 'nan', 'spa', 'ile\_Latn', 'pap', 'mwl', 'uzb\_Latn', 'mww', 'hil', 'lij', 'avk\_Latn', 'lad\_Latn', 'lat\_Latn', 'bos\_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi\_Latn', 'awa', 'swg', 'zsm\_Latn', 'zho\_Hant', 'gcf\_Latn', 'uzb\_Cyrl', 'isl', 'lfn\_Latn', 'shs\_Latn', 'nov\_Latn', 'bho', 'ltz', 'lzh', 'kur\_Latn', 'sun', 'arg', 'pes\_Thaa', 'sqi', 'uig\_Arab', 'csb\_Latn', 'fra', 'hat', 'liv\_Latn', 'non\_Latn', 'sco', 'cmn\_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul\_Latn', 'amh', 'lfn\_Cyrl', 'eus', 'fkv\_Latn', 'tur', 'pus', 'afr', 'brx\_Latn', 'nya', 'acm', 'ota\_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho\_Hans', 'tmw\_Latn', 'kjh', 'ota\_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh\_Hans', 'ara', 'tly\_Latn', 'lug', 'brx', 'bul', 'bel', 'vol\_Latn', 'kat', 'gan', 'got\_Goth', 'vro', 'ext', 'afh\_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif\_Latn', 'cjy\_Hant', 'bre', 'ceb', 'mah', 'nob\_Hebr', 'crh\_Latn', 'prg\_Latn', 'khm', 'ang\_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze\_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie\_Hani', 'arz', 'yue', 'kha', 'san\_Deva', 'jbo\_Latn', 'gos', 'hau\_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig\_Cyrl', 'fao', 'mnw', 'zho', 'orv\_Cyrl', 'iba', 'bel\_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc\_Grek', 'tpw\_Latn', 'oci', 'mfe', 'sna', 'kir\_Cyrl', 'tat\_Latn', 'gom', 'ido\_Latn', 'sgs', 'pau', 'tgk\_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp\_Latn', 'wuu', 'dtp', 'jbo\_Cyrl', 'tet', 'bod', 'yue\_Hans', 'zlm\_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz\_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif\_Latn', 'vie', 'enm\_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina\_Latn', 'cjy\_Hans', 'jdt\_Cyrl', 'gsw', 'glv', 'khm\_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd\_Arab', 
'arq', 'mri', 'kur\_Arab', 'por', 'hin', 'shy\_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue\_Hant', 'kpv', 'tam', 'est', 'frm\_Latn', 'hoc\_Latn', 'bam\_Latn', 'kek\_Latn', 'ksh', 'tlh\_Latn', 'ltg', 'pan\_Guru', 'hnj\_Latn', 'cor', 'gle', 'swe', 'lin', 'qya\_Latn', 'kum', 'mad', 'cmn\_Hant', 'fuv', 'nau', 'mon', 'akl\_Latn', 'guj', 'kaz\_Latn', 'wln', 'tuk\_Latn', 'jav\_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws\_Latn', 'urd', 'stq', 'tat\_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl\_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld\_Latn', 'tzl\_Latn', 'mdf', 'ike\_Latn', 'ces', 'ldn\_Latn', 'egl', 'heb', 'vec', 'zul', 'max\_Latn', 'pes\_Latn', 'yid', 'mal', 'nds'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: mul
* short\_pair: en-mul
* chrF2\_score: 0.451
* bleu: 22.4
* brevity\_penalty: 0.987
* ref\_len: 68724.0
* src\_name: English
* tgt\_name: Multiple languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: mul
* prefer\_old: False
* long\_pair: eng-mul
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-mul\n\n\n* source group: English\n* target group: Multiple languages\n* OPUS readme: eng-mul\n* model: transformer\n* source language(s): eng\n* target language(s): abk acm ady afb afh\\_Latn afr akl\\_Latn aln amh ang\\_Latn apc ara arg arq ary arz asm ast avk\\_Latn awa aze\\_Latn bak bam\\_Latn bel bel\\_Latn ben bho bod bos\\_Latn bre brx brx\\_Latn bul bul\\_Latn cat ceb ces cha che chr chv cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant cor cos crh crh\\_Latn csb\\_Latn cym dan deu dsb dtp dws\\_Latn egl ell enm\\_Latn epo est eus ewe ext fao fij fin fkv\\_Latn fra frm\\_Latn frr fry fuc fuv gan gcf\\_Latn gil gla gle glg glv gom gos got\\_Goth grc\\_Grek grn gsw guj hat hau\\_Latn haw heb hif\\_Latn hil hin hnj\\_Latn hoc hoc\\_Latn hrv hsb hun hye iba ibo ido ido\\_Latn ike\\_Latn ile\\_Latn ilo ina\\_Latn ind isl ita izh jav jav\\_Java jbo jbo\\_Cyrl jbo\\_Latn jdt\\_Cyrl jpn kab kal kan kat kaz\\_Cyrl kaz\\_Latn kek\\_Latn kha khm khm\\_Latn kin kir\\_Cyrl kjh kpv krl ksh kum kur\\_Arab kur\\_Latn lad lad\\_Latn lao lat\\_Latn lav ldn\\_Latn lfn\\_Cyrl lfn\\_Latn lij lin lit liv\\_Latn lkt lld\\_Latn lmo ltg ltz lug lzh lzh\\_Hans mad mah mai mal mar max\\_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob\\_Hebr nog non\\_Latn nov\\_Latn npi nya oci ori orv\\_Cyrl oss ota\\_Arab ota\\_Latn pag pan\\_Guru pap pau pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por ppl\\_Latn prg\\_Latn pus quc qya qya\\_Latn rap rif\\_Latn roh rom ron rue run rus sag sah san\\_Deva scn sco sgs shs\\_Latn shy\\_Latn sin sjn\\_Latn slv sma sme smo sna snd\\_Arab som spa sqi srp\\_Cyrl srp\\_Latn stq sun swe swg swh tah tam tat tat\\_Arab tat\\_Latn tel tet tgk\\_Cyrl tha tir tlh\\_Latn tly\\_Latn tmw\\_Latn toi\\_Latn ton tpw\\_Latn tso tuk tuk\\_Latn tur tvl tyv tzl tzl\\_Latn udm uig\\_Arab uig\\_Cyrl ukr umb urd uzb\\_Cyrl uzb\\_Latn vec vie vie\\_Hani vol\\_Latn vro war wln wol wuu xal xho yid yor yue yue\\_Hans 
yue\\_Hant zho zho\\_Hans zho\\_Hant zlm\\_Latn zsm\\_Latn zul zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 5.0, chr-F: 0.288\ntestset: URL, BLEU: 9.3, chr-F: 0.418\ntestset: URL, BLEU: 17.2, chr-F: 0.488\ntestset: URL, BLEU: 8.2, chr-F: 0.402\ntestset: URL, BLEU: 12.9, chr-F: 0.444\ntestset: URL, BLEU: 17.6, chr-F: 0.170\ntestset: URL, BLEU: 10.9, chr-F: 0.423\ntestset: URL, BLEU: 5.2, chr-F: 0.284\ntestset: URL, BLEU: 11.0, chr-F: 0.431\ntestset: URL, BLEU: 22.6, chr-F: 0.521\ntestset: URL, BLEU: 25.9, chr-F: 0.546\ntestset: URL, BLEU: 10.3, chr-F: 0.394\ntestset: URL, BLEU: 13.3, chr-F: 0.459\ntestset: URL, BLEU: 21.5, chr-F: 0.522\ntestset: URL, BLEU: 8.1, chr-F: 0.371\ntestset: URL, BLEU: 22.1, chr-F: 0.540\ntestset: URL, BLEU: 23.8, chr-F: 0.531\ntestset: URL, BLEU: 9.0, chr-F: 0.376\ntestset: URL, BLEU: 14.2, chr-F: 0.451\ntestset: URL, BLEU: 19.8, chr-F: 0.500\ntestset: URL, BLEU: 22.8, chr-F: 0.518\ntestset: URL, BLEU: 9.8, chr-F: 0.392\ntestset: URL, BLEU: 13.7, chr-F: 0.454\ntestset: URL, BLEU: 20.7, chr-F: 0.514\ntestset: URL, BLEU: 8.4, chr-F: 0.370\ntestset: URL, BLEU: 22.4, chr-F: 0.538\ntestset: URL, BLEU: 23.5, chr-F: 0.532\ntestset: URL, BLEU: 10.0, chr-F: 0.393\ntestset: URL, BLEU: 15.2, chr-F: 0.463\ntestset: URL, BLEU: 22.0, chr-F: 0.524\ntestset: URL, BLEU: 27.2, chr-F: 0.556\ntestset: URL, BLEU: 10.8, chr-F: 0.392\ntestset: URL, BLEU: 14.2, chr-F: 0.449\ntestset: URL, BLEU: 24.3, chr-F: 0.544\ntestset: URL, BLEU: 28.3, chr-F: 0.559\ntestset: URL, BLEU: 9.9, chr-F: 0.377\ntestset: URL, BLEU: 14.3, chr-F: 0.449\ntestset: URL, BLEU: 23.2, chr-F: 0.530\ntestset: URL, BLEU: 16.0, chr-F: 0.463\ntestset: URL, BLEU: 27.8, chr-F: 0.555\ntestset: URL, BLEU: 11.0, 
chr-F: 0.392\ntestset: URL, BLEU: 16.4, chr-F: 0.469\ntestset: URL, BLEU: 22.6, chr-F: 0.515\ntestset: URL, BLEU: 12.1, chr-F: 0.414\ntestset: URL, BLEU: 24.9, chr-F: 0.532\ntestset: URL, BLEU: 7.2, chr-F: 0.311\ntestset: URL, BLEU: 10.9, chr-F: 0.396\ntestset: URL, BLEU: 18.3, chr-F: 0.490\ntestset: URL, BLEU: 10.1, chr-F: 0.421\ntestset: URL, BLEU: 14.5, chr-F: 0.445\ntestset: URL, BLEU: 12.2, chr-F: 0.408\ntestset: URL, BLEU: 21.4, chr-F: 0.517\ntestset: URL, BLEU: 11.2, chr-F: 0.435\ntestset: URL, BLEU: 16.6, chr-F: 0.472\ntestset: URL, BLEU: 13.4, chr-F: 0.435\ntestset: URL, BLEU: 8.1, chr-F: 0.385\ntestset: URL, BLEU: 9.6, chr-F: 0.377\ntestset: URL, BLEU: 17.9, chr-F: 0.482\ntestset: URL, BLEU: 11.8, chr-F: 0.440\ntestset: URL, BLEU: 9.6, chr-F: 0.412\ntestset: URL, BLEU: 14.1, chr-F: 0.446\ntestset: URL, BLEU: 8.0, chr-F: 0.378\ntestset: URL, BLEU: 16.8, chr-F: 0.175\ntestset: URL, BLEU: 9.8, chr-F: 0.380\ntestset: URL, BLEU: 23.8, chr-F: 0.536\ntestset: URL, BLEU: 11.8, chr-F: 0.433\ntestset: URL, BLEU: 7.8, chr-F: 0.398\ntestset: URL, BLEU: 12.2, chr-F: 0.434\ntestset: URL, BLEU: 7.5, chr-F: 0.383\ntestset: URL, BLEU: 18.3, chr-F: 0.179\ntestset: URL, BLEU: 10.7, chr-F: 0.389\ntestset: URL, BLEU: 21.0, chr-F: 0.512\ntestset: URL, BLEU: 10.4, chr-F: 0.420\ntestset: URL, BLEU: 5.8, chr-F: 0.297\ntestset: URL, BLEU: 8.0, chr-F: 0.388\ntestset: URL, BLEU: 13.0, chr-F: 0.415\ntestset: URL, BLEU: 15.0, chr-F: 0.192\ntestset: URL, BLEU: 9.0, chr-F: 0.414\ntestset: URL, BLEU: 9.5, chr-F: 0.415\ntestset: URL, BLEU: 4.2, chr-F: 0.275\ntestset: URL, BLEU: 0.4, chr-F: 0.006\ntestset: URL, BLEU: 1.0, chr-F: 0.058\ntestset: URL, BLEU: 47.0, chr-F: 0.663\ntestset: URL, BLEU: 2.7, chr-F: 0.080\ntestset: URL, BLEU: 8.5, chr-F: 0.455\ntestset: URL, BLEU: 6.2, chr-F: 0.138\ntestset: URL, BLEU: 6.3, chr-F: 0.325\ntestset: URL, BLEU: 1.5, chr-F: 0.107\ntestset: URL, BLEU: 2.1, chr-F: 0.265\ntestset: URL, BLEU: 15.7, chr-F: 0.393\ntestset: URL, BLEU: 0.2, chr-F: 
0.095\ntestset: URL, BLEU: 0.1, chr-F: 0.002\ntestset: URL, BLEU: 19.0, chr-F: 0.500\ntestset: URL, BLEU: 12.7, chr-F: 0.379\ntestset: URL, BLEU: 8.3, chr-F: 0.037\ntestset: URL, BLEU: 13.5, chr-F: 0.396\ntestset: URL, BLEU: 10.0, chr-F: 0.383\ntestset: URL, BLEU: 0.1, chr-F: 0.003\ntestset: URL, BLEU: 0.0, chr-F: 0.147\ntestset: URL, BLEU: 7.6, chr-F: 0.275\ntestset: URL, BLEU: 0.8, chr-F: 0.060\ntestset: URL, BLEU: 32.1, chr-F: 0.542\ntestset: URL, BLEU: 37.0, chr-F: 0.595\ntestset: URL, BLEU: 9.6, chr-F: 0.409\ntestset: URL, BLEU: 24.0, chr-F: 0.475\ntestset: URL, BLEU: 3.9, chr-F: 0.228\ntestset: URL, BLEU: 0.7, chr-F: 0.013\ntestset: URL, BLEU: 2.6, chr-F: 0.212\ntestset: URL, BLEU: 6.0, chr-F: 0.190\ntestset: URL, BLEU: 6.5, chr-F: 0.369\ntestset: URL, BLEU: 0.9, chr-F: 0.086\ntestset: URL, BLEU: 4.2, chr-F: 0.174\ntestset: URL, BLEU: 9.9, chr-F: 0.361\ntestset: URL, BLEU: 3.4, chr-F: 0.230\ntestset: URL, BLEU: 18.0, chr-F: 0.418\ntestset: URL, BLEU: 42.5, chr-F: 0.624\ntestset: URL, BLEU: 25.2, chr-F: 0.505\ntestset: URL, BLEU: 0.9, chr-F: 0.121\ntestset: URL, BLEU: 0.3, chr-F: 0.084\ntestset: URL, BLEU: 0.2, chr-F: 0.040\ntestset: URL, BLEU: 0.4, chr-F: 0.085\ntestset: URL, BLEU: 28.7, chr-F: 0.543\ntestset: URL, BLEU: 3.3, chr-F: 0.295\ntestset: URL, BLEU: 33.4, chr-F: 0.570\ntestset: URL, BLEU: 30.3, chr-F: 0.545\ntestset: URL, BLEU: 18.5, chr-F: 0.486\ntestset: URL, BLEU: 6.8, chr-F: 0.272\ntestset: URL, BLEU: 5.0, chr-F: 0.228\ntestset: URL, BLEU: 5.2, chr-F: 0.277\ntestset: URL, BLEU: 6.9, chr-F: 0.265\ntestset: URL, BLEU: 31.5, chr-F: 0.365\ntestset: URL, BLEU: 18.5, chr-F: 0.459\ntestset: URL, BLEU: 0.9, chr-F: 0.132\ntestset: URL, BLEU: 31.5, chr-F: 0.546\ntestset: URL, BLEU: 0.9, chr-F: 0.128\ntestset: URL, BLEU: 3.0, chr-F: 0.025\ntestset: URL, BLEU: 14.4, chr-F: 0.387\ntestset: URL, BLEU: 0.4, chr-F: 0.061\ntestset: URL, BLEU: 0.3, chr-F: 0.075\ntestset: URL, BLEU: 47.4, chr-F: 0.706\ntestset: URL, BLEU: 10.9, chr-F: 0.341\ntestset: URL, BLEU: 
26.8, chr-F: 0.493\ntestset: URL, BLEU: 32.5, chr-F: 0.565\ntestset: URL, BLEU: 21.5, chr-F: 0.395\ntestset: URL, BLEU: 0.3, chr-F: 0.124\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.0, chr-F: 0.005\ntestset: URL, BLEU: 1.5, chr-F: 0.129\ntestset: URL, BLEU: 0.6, chr-F: 0.106\ntestset: URL, BLEU: 15.4, chr-F: 0.347\ntestset: URL, BLEU: 31.1, chr-F: 0.527\ntestset: URL, BLEU: 6.5, chr-F: 0.385\ntestset: URL, BLEU: 0.2, chr-F: 0.066\ntestset: URL, BLEU: 28.7, chr-F: 0.531\ntestset: URL, BLEU: 21.3, chr-F: 0.443\ntestset: URL, BLEU: 2.8, chr-F: 0.268\ntestset: URL, BLEU: 12.0, chr-F: 0.463\ntestset: URL, BLEU: 13.0, chr-F: 0.401\ntestset: URL, BLEU: 0.2, chr-F: 0.073\ntestset: URL, BLEU: 0.2, chr-F: 0.077\ntestset: URL, BLEU: 5.7, chr-F: 0.308\ntestset: URL, BLEU: 17.1, chr-F: 0.431\ntestset: URL, BLEU: 15.0, chr-F: 0.378\ntestset: URL, BLEU: 16.0, chr-F: 0.437\ntestset: URL, BLEU: 2.9, chr-F: 0.221\ntestset: URL, BLEU: 11.5, chr-F: 0.403\ntestset: URL, BLEU: 2.3, chr-F: 0.089\ntestset: URL, BLEU: 4.3, chr-F: 0.282\ntestset: URL, BLEU: 26.4, chr-F: 0.522\ntestset: URL, BLEU: 20.9, chr-F: 0.493\ntestset: URL, BLEU: 12.5, chr-F: 0.375\ntestset: URL, BLEU: 33.9, chr-F: 0.592\ntestset: URL, BLEU: 4.6, chr-F: 0.050\ntestset: URL, BLEU: 7.8, chr-F: 0.328\ntestset: URL, BLEU: 0.1, chr-F: 0.123\ntestset: URL, BLEU: 6.4, chr-F: 0.008\ntestset: URL, BLEU: 0.0, chr-F: 0.000\ntestset: URL, BLEU: 5.9, chr-F: 0.261\ntestset: URL, BLEU: 13.4, chr-F: 0.382\ntestset: URL, BLEU: 4.8, chr-F: 0.358\ntestset: URL, BLEU: 1.8, chr-F: 0.115\ntestset: URL, BLEU: 8.8, chr-F: 0.354\ntestset: URL, BLEU: 3.7, chr-F: 0.188\ntestset: URL, BLEU: 0.5, chr-F: 0.094\ntestset: URL, BLEU: 0.4, chr-F: 0.243\ntestset: URL, BLEU: 5.2, chr-F: 0.362\ntestset: URL, BLEU: 17.2, chr-F: 0.416\ntestset: URL, BLEU: 0.6, chr-F: 0.009\ntestset: URL, BLEU: 5.5, chr-F: 0.005\ntestset: URL, BLEU: 2.4, chr-F: 0.012\ntestset: URL, BLEU: 2.0, chr-F: 0.099\ntestset: URL, BLEU: 0.4, chr-F: 0.074\ntestset: 
URL, BLEU: 0.9, chr-F: 0.007\ntestset: URL, BLEU: 9.1, chr-F: 0.174\ntestset: URL, BLEU: 1.2, chr-F: 0.154\ntestset: URL, BLEU: 0.1, chr-F: 0.001\ntestset: URL, BLEU: 0.6, chr-F: 0.426\ntestset: URL, BLEU: 8.2, chr-F: 0.366\ntestset: URL, BLEU: 20.4, chr-F: 0.475\ntestset: URL, BLEU: 0.3, chr-F: 0.059\ntestset: URL, BLEU: 0.5, chr-F: 0.104\ntestset: URL, BLEU: 0.2, chr-F: 0.094\ntestset: URL, BLEU: 1.2, chr-F: 0.276\ntestset: URL, BLEU: 17.4, chr-F: 0.488\ntestset: URL, BLEU: 0.3, chr-F: 0.039\ntestset: URL, BLEU: 0.3, chr-F: 0.041\ntestset: URL, BLEU: 0.1, chr-F: 0.083\ntestset: URL, BLEU: 1.4, chr-F: 0.154\ntestset: URL, BLEU: 19.1, chr-F: 0.395\ntestset: URL, BLEU: 4.2, chr-F: 0.382\ntestset: URL, BLEU: 2.1, chr-F: 0.075\ntestset: URL, BLEU: 9.5, chr-F: 0.331\ntestset: URL, BLEU: 9.3, chr-F: 0.372\ntestset: URL, BLEU: 8.3, chr-F: 0.437\ntestset: URL, BLEU: 13.5, chr-F: 0.410\ntestset: URL, BLEU: 2.3, chr-F: 0.008\ntestset: URL, BLEU: 83.6, chr-F: 0.905\ntestset: URL, BLEU: 7.6, chr-F: 0.214\ntestset: URL, BLEU: 31.8, chr-F: 0.540\ntestset: URL, BLEU: 31.3, chr-F: 0.464\ntestset: URL, BLEU: 11.7, chr-F: 0.427\ntestset: URL, BLEU: 0.1, chr-F: 0.000\ntestset: URL, BLEU: 0.6, chr-F: 0.067\ntestset: URL, BLEU: 8.5, chr-F: 0.323\ntestset: URL, BLEU: 8.5, chr-F: 0.320\ntestset: URL, BLEU: 24.5, chr-F: 0.498\ntestset: URL, BLEU: 22.4, chr-F: 0.451\ntestset: URL, BLEU: 3.8, chr-F: 0.169\ntestset: URL, BLEU: 0.2, chr-F: 0.123\ntestset: URL, BLEU: 1.1, chr-F: 0.014\ntestset: URL, BLEU: 0.6, chr-F: 0.109\ntestset: URL, BLEU: 1.8, chr-F: 0.149\ntestset: URL, BLEU: 11.3, chr-F: 0.365\ntestset: URL, BLEU: 0.5, chr-F: 0.004\ntestset: URL, BLEU: 34.4, chr-F: 0.501\ntestset: URL, BLEU: 37.6, chr-F: 0.598\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.2, chr-F: 0.096\ntestset: URL, BLEU: 36.3, chr-F: 0.577\ntestset: URL, BLEU: 0.9, chr-F: 0.180\ntestset: URL, BLEU: 9.8, chr-F: 0.524\ntestset: URL, BLEU: 6.3, chr-F: 0.288\ntestset: URL, BLEU: 5.3, chr-F: 
0.273\ntestset: URL, BLEU: 0.2, chr-F: 0.007\ntestset: URL, BLEU: 3.0, chr-F: 0.230\ntestset: URL, BLEU: 0.2, chr-F: 0.053\ntestset: URL, BLEU: 20.2, chr-F: 0.513\ntestset: URL, BLEU: 6.4, chr-F: 0.301\ntestset: URL, BLEU: 44.7, chr-F: 0.624\ntestset: URL, BLEU: 0.8, chr-F: 0.098\ntestset: URL, BLEU: 2.9, chr-F: 0.143\ntestset: URL, BLEU: 0.6, chr-F: 0.124\ntestset: URL, BLEU: 22.7, chr-F: 0.500\ntestset: URL, BLEU: 31.6, chr-F: 0.570\ntestset: URL, BLEU: 0.5, chr-F: 0.085\ntestset: URL, BLEU: 0.1, chr-F: 0.078\ntestset: URL, BLEU: 0.9, chr-F: 0.137\ntestset: URL, BLEU: 2.7, chr-F: 0.255\ntestset: URL, BLEU: 0.4, chr-F: 0.084\ntestset: URL, BLEU: 1.9, chr-F: 0.050\ntestset: URL, BLEU: 1.3, chr-F: 0.102\ntestset: URL, BLEU: 1.4, chr-F: 0.169\ntestset: URL, BLEU: 7.8, chr-F: 0.329\ntestset: URL, BLEU: 27.0, chr-F: 0.530\ntestset: URL, BLEU: 0.1, chr-F: 0.009\ntestset: URL, BLEU: 9.8, chr-F: 0.434\ntestset: URL, BLEU: 22.2, chr-F: 0.465\ntestset: URL, BLEU: 4.8, chr-F: 0.155\ntestset: URL, BLEU: 0.2, chr-F: 0.007\ntestset: URL, BLEU: 1.7, chr-F: 0.143\ntestset: URL, BLEU: 1.5, chr-F: 0.083\ntestset: URL, BLEU: 30.3, chr-F: 0.514\ntestset: URL, BLEU: 1.6, chr-F: 0.104\ntestset: URL, BLEU: 0.7, chr-F: 0.049\ntestset: URL, BLEU: 0.6, chr-F: 0.064\ntestset: URL, BLEU: 5.4, chr-F: 0.317\ntestset: URL, BLEU: 0.3, chr-F: 0.074\ntestset: URL, BLEU: 12.8, chr-F: 0.313\ntestset: URL, BLEU: 0.8, chr-F: 0.063\ntestset: URL, BLEU: 13.2, chr-F: 0.290\ntestset: URL, BLEU: 12.1, chr-F: 0.416\ntestset: URL, BLEU: 27.1, chr-F: 0.533\ntestset: URL, BLEU: 6.0, chr-F: 0.359\ntestset: URL, BLEU: 16.0, chr-F: 0.274\ntestset: URL, BLEU: 36.7, chr-F: 0.603\ntestset: URL, BLEU: 32.3, chr-F: 0.573\ntestset: URL, BLEU: 0.6, chr-F: 0.198\ntestset: URL, BLEU: 39.0, chr-F: 0.447\ntestset: URL, BLEU: 1.1, chr-F: 0.109\ntestset: URL, BLEU: 42.7, chr-F: 0.614\ntestset: URL, BLEU: 0.6, chr-F: 0.118\ntestset: URL, BLEU: 12.4, chr-F: 0.294\ntestset: URL, BLEU: 5.0, chr-F: 0.404\ntestset: URL, BLEU: 9.9, 
chr-F: 0.326\ntestset: URL, BLEU: 4.7, chr-F: 0.326\ntestset: URL, BLEU: 0.7, chr-F: 0.100\ntestset: URL, BLEU: 5.5, chr-F: 0.304\ntestset: URL, BLEU: 2.2, chr-F: 0.456\ntestset: URL, BLEU: 1.5, chr-F: 0.197\ntestset: URL, BLEU: 0.0, chr-F: 0.032\ntestset: URL, BLEU: 0.3, chr-F: 0.061\ntestset: URL, BLEU: 8.3, chr-F: 0.219\ntestset: URL, BLEU: 32.7, chr-F: 0.619\ntestset: URL, BLEU: 1.4, chr-F: 0.136\ntestset: URL, BLEU: 9.6, chr-F: 0.465\ntestset: URL, BLEU: 9.4, chr-F: 0.383\ntestset: URL, BLEU: 24.1, chr-F: 0.542\ntestset: URL, BLEU: 8.9, chr-F: 0.398\ntestset: URL, BLEU: 10.4, chr-F: 0.249\ntestset: URL, BLEU: 0.2, chr-F: 0.098\ntestset: URL, BLEU: 6.5, chr-F: 0.212\ntestset: URL, BLEU: 2.1, chr-F: 0.266\ntestset: URL, BLEU: 24.3, chr-F: 0.479\ntestset: URL, BLEU: 4.4, chr-F: 0.274\ntestset: URL, BLEU: 8.6, chr-F: 0.344\ntestset: URL, BLEU: 6.9, chr-F: 0.343\ntestset: URL, BLEU: 1.0, chr-F: 0.094\ntestset: URL, BLEU: 23.2, chr-F: 0.420\ntestset: URL, BLEU: 0.3, chr-F: 0.086\ntestset: URL, BLEU: 11.4, chr-F: 0.415\ntestset: URL, BLEU: 8.4, chr-F: 0.218\ntestset: URL, BLEU: 11.5, chr-F: 0.252\ntestset: URL, BLEU: 0.1, chr-F: 0.007\ntestset: URL, BLEU: 19.5, chr-F: 0.552\ntestset: URL, BLEU: 4.0, chr-F: 0.256\ntestset: URL, BLEU: 8.8, chr-F: 0.247\ntestset: URL, BLEU: 21.8, chr-F: 0.192\ntestset: URL, BLEU: 34.3, chr-F: 0.655\ntestset: URL, BLEU: 0.5, chr-F: 0.080",
"### System Info:\n\n\n* hf\\_name: eng-mul\n* source\\_languages: eng\n* target\\_languages: mul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'sjn\\_Latn', 'cat', 'nan', 'spa', 'ile\\_Latn', 'pap', 'mwl', 'uzb\\_Latn', 'mww', 'hil', 'lij', 'avk\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'bos\\_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi\\_Latn', 'awa', 'swg', 'zsm\\_Latn', 'zho\\_Hant', 'gcf\\_Latn', 'uzb\\_Cyrl', 'isl', 'lfn\\_Latn', 'shs\\_Latn', 'nov\\_Latn', 'bho', 'ltz', 'lzh', 'kur\\_Latn', 'sun', 'arg', 'pes\\_Thaa', 'sqi', 'uig\\_Arab', 'csb\\_Latn', 'fra', 'hat', 'liv\\_Latn', 'non\\_Latn', 'sco', 'cmn\\_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul\\_Latn', 'amh', 'lfn\\_Cyrl', 'eus', 'fkv\\_Latn', 'tur', 'pus', 'afr', 'brx\\_Latn', 'nya', 'acm', 'ota\\_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho\\_Hans', 'tmw\\_Latn', 'kjh', 'ota\\_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh\\_Hans', 'ara', 'tly\\_Latn', 'lug', 'brx', 'bul', 'bel', 'vol\\_Latn', 'kat', 'gan', 'got\\_Goth', 'vro', 'ext', 'afh\\_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif\\_Latn', 'cjy\\_Hant', 'bre', 'ceb', 'mah', 'nob\\_Hebr', 'crh\\_Latn', 
'prg\\_Latn', 'khm', 'ang\\_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze\\_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie\\_Hani', 'arz', 'yue', 'kha', 'san\\_Deva', 'jbo\\_Latn', 'gos', 'hau\\_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig\\_Cyrl', 'fao', 'mnw', 'zho', 'orv\\_Cyrl', 'iba', 'bel\\_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc\\_Grek', 'tpw\\_Latn', 'oci', 'mfe', 'sna', 'kir\\_Cyrl', 'tat\\_Latn', 'gom', 'ido\\_Latn', 'sgs', 'pau', 'tgk\\_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp\\_Latn', 'wuu', 'dtp', 'jbo\\_Cyrl', 'tet', 'bod', 'yue\\_Hans', 'zlm\\_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz\\_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif\\_Latn', 'vie', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina\\_Latn', 'cjy\\_Hans', 'jdt\\_Cyrl', 'gsw', 'glv', 'khm\\_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd\\_Arab', 'arq', 'mri', 'kur\\_Arab', 'por', 'hin', 'shy\\_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue\\_Hant', 'kpv', 'tam', 'est', 'frm\\_Latn', 'hoc\\_Latn', 'bam\\_Latn', 'kek\\_Latn', 'ksh', 'tlh\\_Latn', 'ltg', 'pan\\_Guru', 'hnj\\_Latn', 'cor', 'gle', 'swe', 'lin', 'qya\\_Latn', 'kum', 'mad', 'cmn\\_Hant', 'fuv', 'nau', 'mon', 'akl\\_Latn', 'guj', 'kaz\\_Latn', 'wln', 'tuk\\_Latn', 'jav\\_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws\\_Latn', 'urd', 'stq', 'tat\\_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl\\_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld\\_Latn', 'tzl\\_Latn', 'mdf', 'ike\\_Latn', 'ces', 'ldn\\_Latn', 'egl', 'heb', 'vec', 'zul', 'max\\_Latn', 'pes\\_Latn', 'yid', 'mal', 'nds'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* 
tgt\\_alpha3: mul\n* short\\_pair: en-mul\n* chrF2\\_score: 0.451\n* bleu: 22.4\n* brevity\\_penalty: 0.987\n* ref\\_len: 68724.0\n* src\\_name: English\n* tgt\\_name: Multiple languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: mul\n* prefer\\_old: False\n* long\\_pair: eng-mul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ca #es #os #eo #ro #fy #cy #is #lb #su #an #sq #fr #ht #rm #cv #ig #am #eu #tr #ps #af #ny #ch #uk #sl #lt #tk #sg #ar #lg #bg #be #ka #gd #ja #si #br #mh #km #th #ty #rw #te #mk #or #wo #kl #mr #ru #yo #hu #fo #zh #ti #co #ee #oc #sn #mt #ts #pl #gl #nb #bn #tt #bo #lo #id #gn #nv #hy #kn #to #io #so #vi #da #fj #gv #sm #nl #mi #pt #hi #se #as #ta #et #kw #ga #sv #ln #na #mn #gu #wa #lv #jv #el #my #ba #it #hr #ur #ce #nn #fi #mg #rn #xh #ab #de #cs #he #zu #yi #ml #mul #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-mul\n\n\n* source group: English\n* target group: Multiple languages\n* OPUS readme: eng-mul\n* model: transformer\n* source language(s): eng\n* target language(s): abk acm ady afb afh\\_Latn afr akl\\_Latn aln amh ang\\_Latn apc ara arg arq ary arz asm ast avk\\_Latn awa aze\\_Latn bak bam\\_Latn bel bel\\_Latn ben bho bod bos\\_Latn bre brx brx\\_Latn bul bul\\_Latn cat ceb ces cha che chr chv cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant cor cos crh crh\\_Latn csb\\_Latn cym dan deu dsb dtp dws\\_Latn egl ell enm\\_Latn epo est eus ewe ext fao fij fin fkv\\_Latn fra frm\\_Latn frr fry fuc fuv gan gcf\\_Latn gil gla gle glg glv gom gos got\\_Goth grc\\_Grek grn gsw guj hat hau\\_Latn haw heb hif\\_Latn hil hin hnj\\_Latn hoc hoc\\_Latn hrv hsb hun hye iba ibo ido ido\\_Latn ike\\_Latn ile\\_Latn ilo ina\\_Latn ind isl ita izh jav jav\\_Java jbo jbo\\_Cyrl jbo\\_Latn jdt\\_Cyrl jpn kab kal kan kat kaz\\_Cyrl kaz\\_Latn kek\\_Latn kha khm khm\\_Latn kin kir\\_Cyrl kjh kpv krl ksh kum kur\\_Arab kur\\_Latn lad lad\\_Latn lao lat\\_Latn lav ldn\\_Latn lfn\\_Cyrl lfn\\_Latn lij lin lit liv\\_Latn lkt lld\\_Latn lmo ltg ltz lug lzh lzh\\_Hans mad mah mai mal mar max\\_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob\\_Hebr nog non\\_Latn nov\\_Latn npi nya oci ori orv\\_Cyrl oss ota\\_Arab ota\\_Latn pag pan\\_Guru pap pau pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por ppl\\_Latn prg\\_Latn pus quc qya qya\\_Latn rap rif\\_Latn roh rom ron rue run rus sag sah san\\_Deva scn sco sgs shs\\_Latn shy\\_Latn sin sjn\\_Latn slv sma sme smo sna snd\\_Arab som spa sqi srp\\_Cyrl srp\\_Latn stq sun swe swg swh tah tam tat tat\\_Arab tat\\_Latn tel tet tgk\\_Cyrl tha tir tlh\\_Latn tly\\_Latn tmw\\_Latn toi\\_Latn ton tpw\\_Latn tso tuk tuk\\_Latn tur tvl tyv tzl tzl\\_Latn udm uig\\_Arab uig\\_Cyrl ukr umb urd uzb\\_Cyrl uzb\\_Latn vec vie vie\\_Hani vol\\_Latn vro war wln wol wuu xal xho yid yor yue yue\\_Hans 
yue\\_Hant zho zho\\_Hans zho\\_Hant zlm\\_Latn zsm\\_Latn zul zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 5.0, chr-F: 0.288\ntestset: URL, BLEU: 9.3, chr-F: 0.418\ntestset: URL, BLEU: 17.2, chr-F: 0.488\ntestset: URL, BLEU: 8.2, chr-F: 0.402\ntestset: URL, BLEU: 12.9, chr-F: 0.444\ntestset: URL, BLEU: 17.6, chr-F: 0.170\ntestset: URL, BLEU: 10.9, chr-F: 0.423\ntestset: URL, BLEU: 5.2, chr-F: 0.284\ntestset: URL, BLEU: 11.0, chr-F: 0.431\ntestset: URL, BLEU: 22.6, chr-F: 0.521\ntestset: URL, BLEU: 25.9, chr-F: 0.546\ntestset: URL, BLEU: 10.3, chr-F: 0.394\ntestset: URL, BLEU: 13.3, chr-F: 0.459\ntestset: URL, BLEU: 21.5, chr-F: 0.522\ntestset: URL, BLEU: 8.1, chr-F: 0.371\ntestset: URL, BLEU: 22.1, chr-F: 0.540\ntestset: URL, BLEU: 23.8, chr-F: 0.531\ntestset: URL, BLEU: 9.0, chr-F: 0.376\ntestset: URL, BLEU: 14.2, chr-F: 0.451\ntestset: URL, BLEU: 19.8, chr-F: 0.500\ntestset: URL, BLEU: 22.8, chr-F: 0.518\ntestset: URL, BLEU: 9.8, chr-F: 0.392\ntestset: URL, BLEU: 13.7, chr-F: 0.454\ntestset: URL, BLEU: 20.7, chr-F: 0.514\ntestset: URL, BLEU: 8.4, chr-F: 0.370\ntestset: URL, BLEU: 22.4, chr-F: 0.538\ntestset: URL, BLEU: 23.5, chr-F: 0.532\ntestset: URL, BLEU: 10.0, chr-F: 0.393\ntestset: URL, BLEU: 15.2, chr-F: 0.463\ntestset: URL, BLEU: 22.0, chr-F: 0.524\ntestset: URL, BLEU: 27.2, chr-F: 0.556\ntestset: URL, BLEU: 10.8, chr-F: 0.392\ntestset: URL, BLEU: 14.2, chr-F: 0.449\ntestset: URL, BLEU: 24.3, chr-F: 0.544\ntestset: URL, BLEU: 28.3, chr-F: 0.559\ntestset: URL, BLEU: 9.9, chr-F: 0.377\ntestset: URL, BLEU: 14.3, chr-F: 0.449\ntestset: URL, BLEU: 23.2, chr-F: 0.530\ntestset: URL, BLEU: 16.0, chr-F: 0.463\ntestset: URL, BLEU: 27.8, chr-F: 0.555\ntestset: URL, BLEU: 11.0, 
chr-F: 0.392\ntestset: URL, BLEU: 16.4, chr-F: 0.469\ntestset: URL, BLEU: 22.6, chr-F: 0.515\ntestset: URL, BLEU: 12.1, chr-F: 0.414\ntestset: URL, BLEU: 24.9, chr-F: 0.532\ntestset: URL, BLEU: 7.2, chr-F: 0.311\ntestset: URL, BLEU: 10.9, chr-F: 0.396\ntestset: URL, BLEU: 18.3, chr-F: 0.490\ntestset: URL, BLEU: 10.1, chr-F: 0.421\ntestset: URL, BLEU: 14.5, chr-F: 0.445\ntestset: URL, BLEU: 12.2, chr-F: 0.408\ntestset: URL, BLEU: 21.4, chr-F: 0.517\ntestset: URL, BLEU: 11.2, chr-F: 0.435\ntestset: URL, BLEU: 16.6, chr-F: 0.472\ntestset: URL, BLEU: 13.4, chr-F: 0.435\ntestset: URL, BLEU: 8.1, chr-F: 0.385\ntestset: URL, BLEU: 9.6, chr-F: 0.377\ntestset: URL, BLEU: 17.9, chr-F: 0.482\ntestset: URL, BLEU: 11.8, chr-F: 0.440\ntestset: URL, BLEU: 9.6, chr-F: 0.412\ntestset: URL, BLEU: 14.1, chr-F: 0.446\ntestset: URL, BLEU: 8.0, chr-F: 0.378\ntestset: URL, BLEU: 16.8, chr-F: 0.175\ntestset: URL, BLEU: 9.8, chr-F: 0.380\ntestset: URL, BLEU: 23.8, chr-F: 0.536\ntestset: URL, BLEU: 11.8, chr-F: 0.433\ntestset: URL, BLEU: 7.8, chr-F: 0.398\ntestset: URL, BLEU: 12.2, chr-F: 0.434\ntestset: URL, BLEU: 7.5, chr-F: 0.383\ntestset: URL, BLEU: 18.3, chr-F: 0.179\ntestset: URL, BLEU: 10.7, chr-F: 0.389\ntestset: URL, BLEU: 21.0, chr-F: 0.512\ntestset: URL, BLEU: 10.4, chr-F: 0.420\ntestset: URL, BLEU: 5.8, chr-F: 0.297\ntestset: URL, BLEU: 8.0, chr-F: 0.388\ntestset: URL, BLEU: 13.0, chr-F: 0.415\ntestset: URL, BLEU: 15.0, chr-F: 0.192\ntestset: URL, BLEU: 9.0, chr-F: 0.414\ntestset: URL, BLEU: 9.5, chr-F: 0.415\ntestset: URL, BLEU: 4.2, chr-F: 0.275\ntestset: URL, BLEU: 0.4, chr-F: 0.006\ntestset: URL, BLEU: 1.0, chr-F: 0.058\ntestset: URL, BLEU: 47.0, chr-F: 0.663\ntestset: URL, BLEU: 2.7, chr-F: 0.080\ntestset: URL, BLEU: 8.5, chr-F: 0.455\ntestset: URL, BLEU: 6.2, chr-F: 0.138\ntestset: URL, BLEU: 6.3, chr-F: 0.325\ntestset: URL, BLEU: 1.5, chr-F: 0.107\ntestset: URL, BLEU: 2.1, chr-F: 0.265\ntestset: URL, BLEU: 15.7, chr-F: 0.393\ntestset: URL, BLEU: 0.2, chr-F: 
0.095\ntestset: URL, BLEU: 0.1, chr-F: 0.002\ntestset: URL, BLEU: 19.0, chr-F: 0.500\ntestset: URL, BLEU: 12.7, chr-F: 0.379\ntestset: URL, BLEU: 8.3, chr-F: 0.037\ntestset: URL, BLEU: 13.5, chr-F: 0.396\ntestset: URL, BLEU: 10.0, chr-F: 0.383\ntestset: URL, BLEU: 0.1, chr-F: 0.003\ntestset: URL, BLEU: 0.0, chr-F: 0.147\ntestset: URL, BLEU: 7.6, chr-F: 0.275\ntestset: URL, BLEU: 0.8, chr-F: 0.060\ntestset: URL, BLEU: 32.1, chr-F: 0.542\ntestset: URL, BLEU: 37.0, chr-F: 0.595\ntestset: URL, BLEU: 9.6, chr-F: 0.409\ntestset: URL, BLEU: 24.0, chr-F: 0.475\ntestset: URL, BLEU: 3.9, chr-F: 0.228\ntestset: URL, BLEU: 0.7, chr-F: 0.013\ntestset: URL, BLEU: 2.6, chr-F: 0.212\ntestset: URL, BLEU: 6.0, chr-F: 0.190\ntestset: URL, BLEU: 6.5, chr-F: 0.369\ntestset: URL, BLEU: 0.9, chr-F: 0.086\ntestset: URL, BLEU: 4.2, chr-F: 0.174\ntestset: URL, BLEU: 9.9, chr-F: 0.361\ntestset: URL, BLEU: 3.4, chr-F: 0.230\ntestset: URL, BLEU: 18.0, chr-F: 0.418\ntestset: URL, BLEU: 42.5, chr-F: 0.624\ntestset: URL, BLEU: 25.2, chr-F: 0.505\ntestset: URL, BLEU: 0.9, chr-F: 0.121\ntestset: URL, BLEU: 0.3, chr-F: 0.084\ntestset: URL, BLEU: 0.2, chr-F: 0.040\ntestset: URL, BLEU: 0.4, chr-F: 0.085\ntestset: URL, BLEU: 28.7, chr-F: 0.543\ntestset: URL, BLEU: 3.3, chr-F: 0.295\ntestset: URL, BLEU: 33.4, chr-F: 0.570\ntestset: URL, BLEU: 30.3, chr-F: 0.545\ntestset: URL, BLEU: 18.5, chr-F: 0.486\ntestset: URL, BLEU: 6.8, chr-F: 0.272\ntestset: URL, BLEU: 5.0, chr-F: 0.228\ntestset: URL, BLEU: 5.2, chr-F: 0.277\ntestset: URL, BLEU: 6.9, chr-F: 0.265\ntestset: URL, BLEU: 31.5, chr-F: 0.365\ntestset: URL, BLEU: 18.5, chr-F: 0.459\ntestset: URL, BLEU: 0.9, chr-F: 0.132\ntestset: URL, BLEU: 31.5, chr-F: 0.546\ntestset: URL, BLEU: 0.9, chr-F: 0.128\ntestset: URL, BLEU: 3.0, chr-F: 0.025\ntestset: URL, BLEU: 14.4, chr-F: 0.387\ntestset: URL, BLEU: 0.4, chr-F: 0.061\ntestset: URL, BLEU: 0.3, chr-F: 0.075\ntestset: URL, BLEU: 47.4, chr-F: 0.706\ntestset: URL, BLEU: 10.9, chr-F: 0.341\ntestset: URL, BLEU: 
26.8, chr-F: 0.493\ntestset: URL, BLEU: 32.5, chr-F: 0.565\ntestset: URL, BLEU: 21.5, chr-F: 0.395\ntestset: URL, BLEU: 0.3, chr-F: 0.124\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.0, chr-F: 0.005\ntestset: URL, BLEU: 1.5, chr-F: 0.129\ntestset: URL, BLEU: 0.6, chr-F: 0.106\ntestset: URL, BLEU: 15.4, chr-F: 0.347\ntestset: URL, BLEU: 31.1, chr-F: 0.527\ntestset: URL, BLEU: 6.5, chr-F: 0.385\ntestset: URL, BLEU: 0.2, chr-F: 0.066\ntestset: URL, BLEU: 28.7, chr-F: 0.531\ntestset: URL, BLEU: 21.3, chr-F: 0.443\ntestset: URL, BLEU: 2.8, chr-F: 0.268\ntestset: URL, BLEU: 12.0, chr-F: 0.463\ntestset: URL, BLEU: 13.0, chr-F: 0.401\ntestset: URL, BLEU: 0.2, chr-F: 0.073\ntestset: URL, BLEU: 0.2, chr-F: 0.077\ntestset: URL, BLEU: 5.7, chr-F: 0.308\ntestset: URL, BLEU: 17.1, chr-F: 0.431\ntestset: URL, BLEU: 15.0, chr-F: 0.378\ntestset: URL, BLEU: 16.0, chr-F: 0.437\ntestset: URL, BLEU: 2.9, chr-F: 0.221\ntestset: URL, BLEU: 11.5, chr-F: 0.403\ntestset: URL, BLEU: 2.3, chr-F: 0.089\ntestset: URL, BLEU: 4.3, chr-F: 0.282\ntestset: URL, BLEU: 26.4, chr-F: 0.522\ntestset: URL, BLEU: 20.9, chr-F: 0.493\ntestset: URL, BLEU: 12.5, chr-F: 0.375\ntestset: URL, BLEU: 33.9, chr-F: 0.592\ntestset: URL, BLEU: 4.6, chr-F: 0.050\ntestset: URL, BLEU: 7.8, chr-F: 0.328\ntestset: URL, BLEU: 0.1, chr-F: 0.123\ntestset: URL, BLEU: 6.4, chr-F: 0.008\ntestset: URL, BLEU: 0.0, chr-F: 0.000\ntestset: URL, BLEU: 5.9, chr-F: 0.261\ntestset: URL, BLEU: 13.4, chr-F: 0.382\ntestset: URL, BLEU: 4.8, chr-F: 0.358\ntestset: URL, BLEU: 1.8, chr-F: 0.115\ntestset: URL, BLEU: 8.8, chr-F: 0.354\ntestset: URL, BLEU: 3.7, chr-F: 0.188\ntestset: URL, BLEU: 0.5, chr-F: 0.094\ntestset: URL, BLEU: 0.4, chr-F: 0.243\ntestset: URL, BLEU: 5.2, chr-F: 0.362\ntestset: URL, BLEU: 17.2, chr-F: 0.416\ntestset: URL, BLEU: 0.6, chr-F: 0.009\ntestset: URL, BLEU: 5.5, chr-F: 0.005\ntestset: URL, BLEU: 2.4, chr-F: 0.012\ntestset: URL, BLEU: 2.0, chr-F: 0.099\ntestset: URL, BLEU: 0.4, chr-F: 0.074\ntestset: 
URL, BLEU: 0.9, chr-F: 0.007\ntestset: URL, BLEU: 9.1, chr-F: 0.174\ntestset: URL, BLEU: 1.2, chr-F: 0.154\ntestset: URL, BLEU: 0.1, chr-F: 0.001\ntestset: URL, BLEU: 0.6, chr-F: 0.426\ntestset: URL, BLEU: 8.2, chr-F: 0.366\ntestset: URL, BLEU: 20.4, chr-F: 0.475\ntestset: URL, BLEU: 0.3, chr-F: 0.059\ntestset: URL, BLEU: 0.5, chr-F: 0.104\ntestset: URL, BLEU: 0.2, chr-F: 0.094\ntestset: URL, BLEU: 1.2, chr-F: 0.276\ntestset: URL, BLEU: 17.4, chr-F: 0.488\ntestset: URL, BLEU: 0.3, chr-F: 0.039\ntestset: URL, BLEU: 0.3, chr-F: 0.041\ntestset: URL, BLEU: 0.1, chr-F: 0.083\ntestset: URL, BLEU: 1.4, chr-F: 0.154\ntestset: URL, BLEU: 19.1, chr-F: 0.395\ntestset: URL, BLEU: 4.2, chr-F: 0.382\ntestset: URL, BLEU: 2.1, chr-F: 0.075\ntestset: URL, BLEU: 9.5, chr-F: 0.331\ntestset: URL, BLEU: 9.3, chr-F: 0.372\ntestset: URL, BLEU: 8.3, chr-F: 0.437\ntestset: URL, BLEU: 13.5, chr-F: 0.410\ntestset: URL, BLEU: 2.3, chr-F: 0.008\ntestset: URL, BLEU: 83.6, chr-F: 0.905\ntestset: URL, BLEU: 7.6, chr-F: 0.214\ntestset: URL, BLEU: 31.8, chr-F: 0.540\ntestset: URL, BLEU: 31.3, chr-F: 0.464\ntestset: URL, BLEU: 11.7, chr-F: 0.427\ntestset: URL, BLEU: 0.1, chr-F: 0.000\ntestset: URL, BLEU: 0.6, chr-F: 0.067\ntestset: URL, BLEU: 8.5, chr-F: 0.323\ntestset: URL, BLEU: 8.5, chr-F: 0.320\ntestset: URL, BLEU: 24.5, chr-F: 0.498\ntestset: URL, BLEU: 22.4, chr-F: 0.451\ntestset: URL, BLEU: 3.8, chr-F: 0.169\ntestset: URL, BLEU: 0.2, chr-F: 0.123\ntestset: URL, BLEU: 1.1, chr-F: 0.014\ntestset: URL, BLEU: 0.6, chr-F: 0.109\ntestset: URL, BLEU: 1.8, chr-F: 0.149\ntestset: URL, BLEU: 11.3, chr-F: 0.365\ntestset: URL, BLEU: 0.5, chr-F: 0.004\ntestset: URL, BLEU: 34.4, chr-F: 0.501\ntestset: URL, BLEU: 37.6, chr-F: 0.598\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.2, chr-F: 0.096\ntestset: URL, BLEU: 36.3, chr-F: 0.577\ntestset: URL, BLEU: 0.9, chr-F: 0.180\ntestset: URL, BLEU: 9.8, chr-F: 0.524\ntestset: URL, BLEU: 6.3, chr-F: 0.288\ntestset: URL, BLEU: 5.3, chr-F: 
0.273\ntestset: URL, BLEU: 0.2, chr-F: 0.007\ntestset: URL, BLEU: 3.0, chr-F: 0.230\ntestset: URL, BLEU: 0.2, chr-F: 0.053\ntestset: URL, BLEU: 20.2, chr-F: 0.513\ntestset: URL, BLEU: 6.4, chr-F: 0.301\ntestset: URL, BLEU: 44.7, chr-F: 0.624\ntestset: URL, BLEU: 0.8, chr-F: 0.098\ntestset: URL, BLEU: 2.9, chr-F: 0.143\ntestset: URL, BLEU: 0.6, chr-F: 0.124\ntestset: URL, BLEU: 22.7, chr-F: 0.500\ntestset: URL, BLEU: 31.6, chr-F: 0.570\ntestset: URL, BLEU: 0.5, chr-F: 0.085\ntestset: URL, BLEU: 0.1, chr-F: 0.078\ntestset: URL, BLEU: 0.9, chr-F: 0.137\ntestset: URL, BLEU: 2.7, chr-F: 0.255\ntestset: URL, BLEU: 0.4, chr-F: 0.084\ntestset: URL, BLEU: 1.9, chr-F: 0.050\ntestset: URL, BLEU: 1.3, chr-F: 0.102\ntestset: URL, BLEU: 1.4, chr-F: 0.169\ntestset: URL, BLEU: 7.8, chr-F: 0.329\ntestset: URL, BLEU: 27.0, chr-F: 0.530\ntestset: URL, BLEU: 0.1, chr-F: 0.009\ntestset: URL, BLEU: 9.8, chr-F: 0.434\ntestset: URL, BLEU: 22.2, chr-F: 0.465\ntestset: URL, BLEU: 4.8, chr-F: 0.155\ntestset: URL, BLEU: 0.2, chr-F: 0.007\ntestset: URL, BLEU: 1.7, chr-F: 0.143\ntestset: URL, BLEU: 1.5, chr-F: 0.083\ntestset: URL, BLEU: 30.3, chr-F: 0.514\ntestset: URL, BLEU: 1.6, chr-F: 0.104\ntestset: URL, BLEU: 0.7, chr-F: 0.049\ntestset: URL, BLEU: 0.6, chr-F: 0.064\ntestset: URL, BLEU: 5.4, chr-F: 0.317\ntestset: URL, BLEU: 0.3, chr-F: 0.074\ntestset: URL, BLEU: 12.8, chr-F: 0.313\ntestset: URL, BLEU: 0.8, chr-F: 0.063\ntestset: URL, BLEU: 13.2, chr-F: 0.290\ntestset: URL, BLEU: 12.1, chr-F: 0.416\ntestset: URL, BLEU: 27.1, chr-F: 0.533\ntestset: URL, BLEU: 6.0, chr-F: 0.359\ntestset: URL, BLEU: 16.0, chr-F: 0.274\ntestset: URL, BLEU: 36.7, chr-F: 0.603\ntestset: URL, BLEU: 32.3, chr-F: 0.573\ntestset: URL, BLEU: 0.6, chr-F: 0.198\ntestset: URL, BLEU: 39.0, chr-F: 0.447\ntestset: URL, BLEU: 1.1, chr-F: 0.109\ntestset: URL, BLEU: 42.7, chr-F: 0.614\ntestset: URL, BLEU: 0.6, chr-F: 0.118\ntestset: URL, BLEU: 12.4, chr-F: 0.294\ntestset: URL, BLEU: 5.0, chr-F: 0.404\ntestset: URL, BLEU: 9.9, 
chr-F: 0.326\ntestset: URL, BLEU: 4.7, chr-F: 0.326\ntestset: URL, BLEU: 0.7, chr-F: 0.100\ntestset: URL, BLEU: 5.5, chr-F: 0.304\ntestset: URL, BLEU: 2.2, chr-F: 0.456\ntestset: URL, BLEU: 1.5, chr-F: 0.197\ntestset: URL, BLEU: 0.0, chr-F: 0.032\ntestset: URL, BLEU: 0.3, chr-F: 0.061\ntestset: URL, BLEU: 8.3, chr-F: 0.219\ntestset: URL, BLEU: 32.7, chr-F: 0.619\ntestset: URL, BLEU: 1.4, chr-F: 0.136\ntestset: URL, BLEU: 9.6, chr-F: 0.465\ntestset: URL, BLEU: 9.4, chr-F: 0.383\ntestset: URL, BLEU: 24.1, chr-F: 0.542\ntestset: URL, BLEU: 8.9, chr-F: 0.398\ntestset: URL, BLEU: 10.4, chr-F: 0.249\ntestset: URL, BLEU: 0.2, chr-F: 0.098\ntestset: URL, BLEU: 6.5, chr-F: 0.212\ntestset: URL, BLEU: 2.1, chr-F: 0.266\ntestset: URL, BLEU: 24.3, chr-F: 0.479\ntestset: URL, BLEU: 4.4, chr-F: 0.274\ntestset: URL, BLEU: 8.6, chr-F: 0.344\ntestset: URL, BLEU: 6.9, chr-F: 0.343\ntestset: URL, BLEU: 1.0, chr-F: 0.094\ntestset: URL, BLEU: 23.2, chr-F: 0.420\ntestset: URL, BLEU: 0.3, chr-F: 0.086\ntestset: URL, BLEU: 11.4, chr-F: 0.415\ntestset: URL, BLEU: 8.4, chr-F: 0.218\ntestset: URL, BLEU: 11.5, chr-F: 0.252\ntestset: URL, BLEU: 0.1, chr-F: 0.007\ntestset: URL, BLEU: 19.5, chr-F: 0.552\ntestset: URL, BLEU: 4.0, chr-F: 0.256\ntestset: URL, BLEU: 8.8, chr-F: 0.247\ntestset: URL, BLEU: 21.8, chr-F: 0.192\ntestset: URL, BLEU: 34.3, chr-F: 0.655\ntestset: URL, BLEU: 0.5, chr-F: 0.080",
"### System Info:\n\n\n* hf\\_name: eng-mul\n* source\\_languages: eng\n* target\\_languages: mul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'sjn\\_Latn', 'cat', 'nan', 'spa', 'ile\\_Latn', 'pap', 'mwl', 'uzb\\_Latn', 'mww', 'hil', 'lij', 'avk\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'bos\\_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi\\_Latn', 'awa', 'swg', 'zsm\\_Latn', 'zho\\_Hant', 'gcf\\_Latn', 'uzb\\_Cyrl', 'isl', 'lfn\\_Latn', 'shs\\_Latn', 'nov\\_Latn', 'bho', 'ltz', 'lzh', 'kur\\_Latn', 'sun', 'arg', 'pes\\_Thaa', 'sqi', 'uig\\_Arab', 'csb\\_Latn', 'fra', 'hat', 'liv\\_Latn', 'non\\_Latn', 'sco', 'cmn\\_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul\\_Latn', 'amh', 'lfn\\_Cyrl', 'eus', 'fkv\\_Latn', 'tur', 'pus', 'afr', 'brx\\_Latn', 'nya', 'acm', 'ota\\_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho\\_Hans', 'tmw\\_Latn', 'kjh', 'ota\\_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh\\_Hans', 'ara', 'tly\\_Latn', 'lug', 'brx', 'bul', 'bel', 'vol\\_Latn', 'kat', 'gan', 'got\\_Goth', 'vro', 'ext', 'afh\\_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif\\_Latn', 'cjy\\_Hant', 'bre', 'ceb', 'mah', 'nob\\_Hebr', 'crh\\_Latn', 
'prg\\_Latn', 'khm', 'ang\\_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze\\_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie\\_Hani', 'arz', 'yue', 'kha', 'san\\_Deva', 'jbo\\_Latn', 'gos', 'hau\\_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig\\_Cyrl', 'fao', 'mnw', 'zho', 'orv\\_Cyrl', 'iba', 'bel\\_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc\\_Grek', 'tpw\\_Latn', 'oci', 'mfe', 'sna', 'kir\\_Cyrl', 'tat\\_Latn', 'gom', 'ido\\_Latn', 'sgs', 'pau', 'tgk\\_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp\\_Latn', 'wuu', 'dtp', 'jbo\\_Cyrl', 'tet', 'bod', 'yue\\_Hans', 'zlm\\_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz\\_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif\\_Latn', 'vie', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina\\_Latn', 'cjy\\_Hans', 'jdt\\_Cyrl', 'gsw', 'glv', 'khm\\_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd\\_Arab', 'arq', 'mri', 'kur\\_Arab', 'por', 'hin', 'shy\\_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue\\_Hant', 'kpv', 'tam', 'est', 'frm\\_Latn', 'hoc\\_Latn', 'bam\\_Latn', 'kek\\_Latn', 'ksh', 'tlh\\_Latn', 'ltg', 'pan\\_Guru', 'hnj\\_Latn', 'cor', 'gle', 'swe', 'lin', 'qya\\_Latn', 'kum', 'mad', 'cmn\\_Hant', 'fuv', 'nau', 'mon', 'akl\\_Latn', 'guj', 'kaz\\_Latn', 'wln', 'tuk\\_Latn', 'jav\\_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws\\_Latn', 'urd', 'stq', 'tat\\_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl\\_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld\\_Latn', 'tzl\\_Latn', 'mdf', 'ike\\_Latn', 'ces', 'ldn\\_Latn', 'egl', 'heb', 'vec', 'zul', 'max\\_Latn', 'pes\\_Latn', 'yid', 'mal', 'nds'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* 
tgt\\_alpha3: mul\n* short\\_pair: en-mul\n* chrF2\\_score: 0.451\n* bleu: 22.4\n* brevity\\_penalty: 0.987\n* ref\\_len: 68724.0\n* src\\_name: English\n* tgt\\_name: Multiple languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: mul\n* prefer\\_old: False\n* long\\_pair: eng-mul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
315,
8540,
2809
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ca #es #os #eo #ro #fy #cy #is #lb #su #an #sq #fr #ht #rm #cv #ig #am #eu #tr #ps #af #ny #ch #uk #sl #lt #tk #sg #ar #lg #bg #be #ka #gd #ja #si #br #mh #km #th #ty #rw #te #mk #or #wo #kl #mr #ru #yo #hu #fo #zh #ti #co #ee #oc #sn #mt #ts #pl #gl #nb #bn #tt #bo #lo #id #gn #nv #hy #kn #to #io #so #vi #da #fj #gv #sm #nl #mi #pt #hi #se #as #ta #et #kw #ga #sv #ln #na #mn #gu #wa #lv #jv #el #my #ba #it #hr #ur #ce #nn #fi #mg #rn #xh #ab #de #cs #he #zu #yi #ml #mul #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-mul\n\n\n* source group: English\n* target group: Multiple languages\n* OPUS readme: eng-mul\n* model: transformer\n* source language(s): eng\n* target language(s): abk acm ady afb afh\\_Latn afr akl\\_Latn aln amh ang\\_Latn apc ara arg arq ary arz asm ast avk\\_Latn awa aze\\_Latn bak bam\\_Latn bel bel\\_Latn ben bho bod bos\\_Latn bre brx brx\\_Latn bul bul\\_Latn cat ceb ces cha che chr chv cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant cor cos crh crh\\_Latn csb\\_Latn cym dan deu dsb dtp dws\\_Latn egl ell enm\\_Latn epo est eus ewe ext fao fij fin fkv\\_Latn fra frm\\_Latn frr fry fuc fuv gan gcf\\_Latn gil gla gle glg glv gom gos got\\_Goth grc\\_Grek grn gsw guj hat hau\\_Latn haw heb hif\\_Latn hil hin hnj\\_Latn hoc hoc\\_Latn hrv hsb hun hye iba ibo ido ido\\_Latn ike\\_Latn ile\\_Latn ilo ina\\_Latn ind isl ita izh jav jav\\_Java jbo jbo\\_Cyrl jbo\\_Latn jdt\\_Cyrl jpn kab kal kan kat kaz\\_Cyrl kaz\\_Latn kek\\_Latn kha khm khm\\_Latn kin kir\\_Cyrl kjh kpv krl ksh kum kur\\_Arab kur\\_Latn lad lad\\_Latn lao lat\\_Latn lav ldn\\_Latn lfn\\_Cyrl lfn\\_Latn lij lin lit liv\\_Latn lkt lld\\_Latn lmo ltg ltz lug lzh lzh\\_Hans mad mah mai mal mar max\\_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob\\_Hebr nog non\\_Latn nov\\_Latn npi nya oci ori 
orv\\_Cyrl oss ota\\_Arab ota\\_Latn pag pan\\_Guru pap pau pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por ppl\\_Latn prg\\_Latn pus quc qya qya\\_Latn rap rif\\_Latn roh rom ron rue run rus sag sah san\\_Deva scn sco sgs shs\\_Latn shy\\_Latn sin sjn\\_Latn slv sma sme smo sna snd\\_Arab som spa sqi srp\\_Cyrl srp\\_Latn stq sun swe swg swh tah tam tat tat\\_Arab tat\\_Latn tel tet tgk\\_Cyrl tha tir tlh\\_Latn tly\\_Latn tmw\\_Latn toi\\_Latn ton tpw\\_Latn tso tuk tuk\\_Latn tur tvl tyv tzl tzl\\_Latn udm uig\\_Arab uig\\_Cyrl ukr umb urd uzb\\_Cyrl uzb\\_Latn vec vie vie\\_Hani vol\\_Latn vro war wln wol wuu xal xho yid yor yue yue\\_Hans yue\\_Hant zho zho\\_Hans zho\\_Hant zlm\\_Latn zsm\\_Latn zul zza\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 5.0, chr-F: 0.288\ntestset: URL, BLEU: 9.3, chr-F: 0.418\ntestset: URL, BLEU: 17.2, chr-F: 0.488\ntestset: URL, BLEU: 8.2, chr-F: 0.402\ntestset: URL, BLEU: 12.9, chr-F: 0.444\ntestset: URL, BLEU: 17.6, chr-F: 0.170\ntestset: URL, BLEU: 10.9, chr-F: 0.423\ntestset: URL, BLEU: 5.2, chr-F: 0.284\ntestset: URL, BLEU: 11.0, chr-F: 0.431\ntestset: URL, BLEU: 22.6, chr-F: 0.521\ntestset: URL, BLEU: 25.9, chr-F: 0.546\ntestset: URL, BLEU: 10.3, chr-F: 0.394\ntestset: URL, BLEU: 13.3, chr-F: 0.459\ntestset: URL, BLEU: 21.5, chr-F: 0.522\ntestset: URL, BLEU: 8.1, chr-F: 0.371\ntestset: URL, BLEU: 22.1, chr-F: 0.540\ntestset: URL, BLEU: 23.8, chr-F: 0.531\ntestset: URL, BLEU: 9.0, chr-F: 0.376\ntestset: URL, BLEU: 14.2, chr-F: 0.451\ntestset: URL, BLEU: 19.8, chr-F: 0.500\ntestset: URL, BLEU: 22.8, chr-F: 0.518\ntestset: URL, BLEU: 9.8, chr-F: 0.392\ntestset: URL, BLEU: 13.7, chr-F: 0.454\ntestset: URL, BLEU: 20.7, chr-F: 0.514\ntestset: URL, 
BLEU: 8.4, chr-F: 0.370\ntestset: URL, BLEU: 22.4, chr-F: 0.538\ntestset: URL, BLEU: 23.5, chr-F: 0.532\ntestset: URL, BLEU: 10.0, chr-F: 0.393\ntestset: URL, BLEU: 15.2, chr-F: 0.463\ntestset: URL, BLEU: 22.0, chr-F: 0.524\ntestset: URL, BLEU: 27.2, chr-F: 0.556\ntestset: URL, BLEU: 10.8, chr-F: 0.392\ntestset: URL, BLEU: 14.2, chr-F: 0.449\ntestset: URL, BLEU: 24.3, chr-F: 0.544\ntestset: URL, BLEU: 28.3, chr-F: 0.559\ntestset: URL, BLEU: 9.9, chr-F: 0.377\ntestset: URL, BLEU: 14.3, chr-F: 0.449\ntestset: URL, BLEU: 23.2, chr-F: 0.530\ntestset: URL, BLEU: 16.0, chr-F: 0.463\ntestset: URL, BLEU: 27.8, chr-F: 0.555\ntestset: URL, BLEU: 11.0, chr-F: 0.392\ntestset: URL, BLEU: 16.4, chr-F: 0.469\ntestset: URL, BLEU: 22.6, chr-F: 0.515\ntestset: URL, BLEU: 12.1, chr-F: 0.414\ntestset: URL, BLEU: 24.9, chr-F: 0.532\ntestset: URL, BLEU: 7.2, chr-F: 0.311\ntestset: URL, BLEU: 10.9, chr-F: 0.396\ntestset: URL, BLEU: 18.3, chr-F: 0.490\ntestset: URL, BLEU: 10.1, chr-F: 0.421\ntestset: URL, BLEU: 14.5, chr-F: 0.445\ntestset: URL, BLEU: 12.2, chr-F: 0.408\ntestset: URL, BLEU: 21.4, chr-F: 0.517\ntestset: URL, BLEU: 11.2, chr-F: 0.435\ntestset: URL, BLEU: 16.6, chr-F: 0.472\ntestset: URL, BLEU: 13.4, chr-F: 0.435\ntestset: URL, BLEU: 8.1, chr-F: 0.385\ntestset: URL, BLEU: 9.6, chr-F: 0.377\ntestset: URL, BLEU: 17.9, chr-F: 0.482\ntestset: URL, BLEU: 11.8, chr-F: 0.440\ntestset: URL, BLEU: 9.6, chr-F: 0.412\ntestset: URL, BLEU: 14.1, chr-F: 0.446\ntestset: URL, BLEU: 8.0, chr-F: 0.378\ntestset: URL, BLEU: 16.8, chr-F: 0.175\ntestset: URL, BLEU: 9.8, chr-F: 0.380\ntestset: URL, BLEU: 23.8, chr-F: 0.536\ntestset: URL, BLEU: 11.8, chr-F: 0.433\ntestset: URL, BLEU: 7.8, chr-F: 0.398\ntestset: URL, BLEU: 12.2, chr-F: 0.434\ntestset: URL, BLEU: 7.5, chr-F: 0.383\ntestset: URL, BLEU: 18.3, chr-F: 0.179\ntestset: URL, BLEU: 10.7, chr-F: 0.389\ntestset: URL, BLEU: 21.0, chr-F: 0.512\ntestset: URL, BLEU: 10.4, chr-F: 0.420\ntestset: URL, BLEU: 5.8, chr-F: 0.297\ntestset: URL, BLEU: 8.0, 
chr-F: 0.388\ntestset: URL, BLEU: 13.0, chr-F: 0.415\ntestset: URL, BLEU: 15.0, chr-F: 0.192\ntestset: URL, BLEU: 9.0, chr-F: 0.414\ntestset: URL, BLEU: 9.5, chr-F: 0.415\ntestset: URL, BLEU: 4.2, chr-F: 0.275\ntestset: URL, BLEU: 0.4, chr-F: 0.006\ntestset: URL, BLEU: 1.0, chr-F: 0.058\ntestset: URL, BLEU: 47.0, chr-F: 0.663\ntestset: URL, BLEU: 2.7, chr-F: 0.080\ntestset: URL, BLEU: 8.5, chr-F: 0.455\ntestset: URL, BLEU: 6.2, chr-F: 0.138\ntestset: URL, BLEU: 6.3, chr-F: 0.325\ntestset: URL, BLEU: 1.5, chr-F: 0.107\ntestset: URL, BLEU: 2.1, chr-F: 0.265\ntestset: URL, BLEU: 15.7, chr-F: 0.393\ntestset: URL, BLEU: 0.2, chr-F: 0.095\ntestset: URL, BLEU: 0.1, chr-F: 0.002\ntestset: URL, BLEU: 19.0, chr-F: 0.500\ntestset: URL, BLEU: 12.7, chr-F: 0.379\ntestset: URL, BLEU: 8.3, chr-F: 0.037\ntestset: URL, BLEU: 13.5, chr-F: 0.396\ntestset: URL, BLEU: 10.0, chr-F: 0.383\ntestset: URL, BLEU: 0.1, chr-F: 0.003\ntestset: URL, BLEU: 0.0, chr-F: 0.147\ntestset: URL, BLEU: 7.6, chr-F: 0.275\ntestset: URL, BLEU: 0.8, chr-F: 0.060\ntestset: URL, BLEU: 32.1, chr-F: 0.542\ntestset: URL, BLEU: 37.0, chr-F: 0.595\ntestset: URL, BLEU: 9.6, chr-F: 0.409\ntestset: URL, BLEU: 24.0, chr-F: 0.475\ntestset: URL, BLEU: 3.9, chr-F: 0.228\ntestset: URL, BLEU: 0.7, chr-F: 0.013\ntestset: URL, BLEU: 2.6, chr-F: 0.212\ntestset: URL, BLEU: 6.0, chr-F: 0.190\ntestset: URL, BLEU: 6.5, chr-F: 0.369\ntestset: URL, BLEU: 0.9, chr-F: 0.086\ntestset: URL, BLEU: 4.2, chr-F: 0.174\ntestset: URL, BLEU: 9.9, chr-F: 0.361\ntestset: URL, BLEU: 3.4, chr-F: 0.230\ntestset: URL, BLEU: 18.0, chr-F: 0.418\ntestset: URL, BLEU: 42.5, chr-F: 0.624\ntestset: URL, BLEU: 25.2, chr-F: 0.505\ntestset: URL, BLEU: 0.9, chr-F: 0.121\ntestset: URL, BLEU: 0.3, chr-F: 0.084\ntestset: URL, BLEU: 0.2, chr-F: 0.040\ntestset: URL, BLEU: 0.4, chr-F: 0.085\ntestset: URL, BLEU: 28.7, chr-F: 0.543\ntestset: URL, BLEU: 3.3, chr-F: 0.295\ntestset: URL, BLEU: 33.4, chr-F: 0.570\ntestset: URL, BLEU: 30.3, chr-F: 0.545\ntestset: URL, 
BLEU: 18.5, chr-F: 0.486\ntestset: URL, BLEU: 6.8, chr-F: 0.272\ntestset: URL, BLEU: 5.0, chr-F: 0.228\ntestset: URL, BLEU: 5.2, chr-F: 0.277\ntestset: URL, BLEU: 6.9, chr-F: 0.265\ntestset: URL, BLEU: 31.5, chr-F: 0.365\ntestset: URL, BLEU: 18.5, chr-F: 0.459\ntestset: URL, BLEU: 0.9, chr-F: 0.132\ntestset: URL, BLEU: 31.5, chr-F: 0.546\ntestset: URL, BLEU: 0.9, chr-F: 0.128\ntestset: URL, BLEU: 3.0, chr-F: 0.025\ntestset: URL, BLEU: 14.4, chr-F: 0.387\ntestset: URL, BLEU: 0.4, chr-F: 0.061\ntestset: URL, BLEU: 0.3, chr-F: 0.075\ntestset: URL, BLEU: 47.4, chr-F: 0.706\ntestset: URL, BLEU: 10.9, chr-F: 0.341\ntestset: URL, BLEU: 26.8, chr-F: 0.493\ntestset: URL, BLEU: 32.5, chr-F: 0.565\ntestset: URL, BLEU: 21.5, chr-F: 0.395\ntestset: URL, BLEU: 0.3, chr-F: 0.124\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.0, chr-F: 0.005\ntestset: URL, BLEU: 1.5, chr-F: 0.129\ntestset: URL, BLEU: 0.6, chr-F: 0.106\ntestset: URL, BLEU: 15.4, chr-F: 0.347\ntestset: URL, BLEU: 31.1, chr-F: 0.527\ntestset: URL, BLEU: 6.5, chr-F: 0.385\ntestset: URL, BLEU: 0.2, chr-F: 0.066\ntestset: URL, BLEU: 28.7, chr-F: 0.531\ntestset: URL, BLEU: 21.3, chr-F: 0.443\ntestset: URL, BLEU: 2.8, chr-F: 0.268\ntestset: URL, BLEU: 12.0, chr-F: 0.463\ntestset: URL, BLEU: 13.0, chr-F: 0.401\ntestset: URL, BLEU: 0.2, chr-F: 0.073\ntestset: URL, BLEU: 0.2, chr-F: 0.077\ntestset: URL, BLEU: 5.7, chr-F: 0.308\ntestset: URL, BLEU: 17.1, chr-F: 0.431\ntestset: URL, BLEU: 15.0, chr-F: 0.378\ntestset: URL, BLEU: 16.0, chr-F: 0.437\ntestset: URL, BLEU: 2.9, chr-F: 0.221\ntestset: URL, BLEU: 11.5, chr-F: 0.403\ntestset: URL, BLEU: 2.3, chr-F: 0.089\ntestset: URL, BLEU: 4.3, chr-F: 0.282\ntestset: URL, BLEU: 26.4, chr-F: 0.522\ntestset: URL, BLEU: 20.9, chr-F: 0.493\ntestset: URL, BLEU: 12.5, chr-F: 0.375\ntestset: URL, BLEU: 33.9, chr-F: 0.592\ntestset: URL, BLEU: 4.6, chr-F: 0.050\ntestset: URL, BLEU: 7.8, chr-F: 0.328\ntestset: URL, BLEU: 0.1, chr-F: 0.123\ntestset: URL, BLEU: 6.4, chr-F: 
0.008\ntestset: URL, BLEU: 0.0, chr-F: 0.000\ntestset: URL, BLEU: 5.9, chr-F: 0.261\ntestset: URL, BLEU: 13.4, chr-F: 0.382\ntestset: URL, BLEU: 4.8, chr-F: 0.358\ntestset: URL, BLEU: 1.8, chr-F: 0.115\ntestset: URL, BLEU: 8.8, chr-F: 0.354\ntestset: URL, BLEU: 3.7, chr-F: 0.188\ntestset: URL, BLEU: 0.5, chr-F: 0.094\ntestset: URL, BLEU: 0.4, chr-F: 0.243\ntestset: URL, BLEU: 5.2, chr-F: 0.362\ntestset: URL, BLEU: 17.2, chr-F: 0.416\ntestset: URL, BLEU: 0.6, chr-F: 0.009\ntestset: URL, BLEU: 5.5, chr-F: 0.005\ntestset: URL, BLEU: 2.4, chr-F: 0.012\ntestset: URL, BLEU: 2.0, chr-F: 0.099\ntestset: URL, BLEU: 0.4, chr-F: 0.074\ntestset: URL, BLEU: 0.9, chr-F: 0.007\ntestset: URL, BLEU: 9.1, chr-F: 0.174\ntestset: URL, BLEU: 1.2, chr-F: 0.154\ntestset: URL, BLEU: 0.1, chr-F: 0.001\ntestset: URL, BLEU: 0.6, chr-F: 0.426\ntestset: URL, BLEU: 8.2, chr-F: 0.366\ntestset: URL, BLEU: 20.4, chr-F: 0.475\ntestset: URL, BLEU: 0.3, chr-F: 0.059\ntestset: URL, BLEU: 0.5, chr-F: 0.104\ntestset: URL, BLEU: 0.2, chr-F: 0.094\ntestset: URL, BLEU: 1.2, chr-F: 0.276\ntestset: URL, BLEU: 17.4, chr-F: 0.488\ntestset: URL, BLEU: 0.3, chr-F: 0.039\ntestset: URL, BLEU: 0.3, chr-F: 0.041\ntestset: URL, BLEU: 0.1, chr-F: 0.083\ntestset: URL, BLEU: 1.4, chr-F: 0.154\ntestset: URL, BLEU: 19.1, chr-F: 0.395\ntestset: URL, BLEU: 4.2, chr-F: 0.382\ntestset: URL, BLEU: 2.1, chr-F: 0.075\ntestset: URL, BLEU: 9.5, chr-F: 0.331\ntestset: URL, BLEU: 9.3, chr-F: 0.372\ntestset: URL, BLEU: 8.3, chr-F: 0.437\ntestset: URL, BLEU: 13.5, chr-F: 0.410\ntestset: URL, BLEU: 2.3, chr-F: 0.008\ntestset: URL, BLEU: 83.6, chr-F: 0.905\ntestset: URL, BLEU: 7.6, chr-F: 0.214\ntestset: URL, BLEU: 31.8, chr-F: 0.540\ntestset: URL, BLEU: 31.3, chr-F: 0.464\ntestset: URL, BLEU: 11.7, chr-F: 0.427\ntestset: URL, BLEU: 0.1, chr-F: 0.000\ntestset: URL, BLEU: 0.6, chr-F: 0.067\ntestset: URL, BLEU: 8.5, chr-F: 0.323\ntestset: URL, BLEU: 8.5, chr-F: 0.320\ntestset: URL, BLEU: 24.5, chr-F: 0.498\ntestset: URL, BLEU: 22.4, 
chr-F: 0.451\ntestset: URL, BLEU: 3.8, chr-F: 0.169\ntestset: URL, BLEU: 0.2, chr-F: 0.123\ntestset: URL, BLEU: 1.1, chr-F: 0.014\ntestset: URL, BLEU: 0.6, chr-F: 0.109\ntestset: URL, BLEU: 1.8, chr-F: 0.149\ntestset: URL, BLEU: 11.3, chr-F: 0.365\ntestset: URL, BLEU: 0.5, chr-F: 0.004\ntestset: URL, BLEU: 34.4, chr-F: 0.501\ntestset: URL, BLEU: 37.6, chr-F: 0.598\ntestset: URL, BLEU: 0.2, chr-F: 0.010\ntestset: URL, BLEU: 0.2, chr-F: 0.096\ntestset: URL, BLEU: 36.3, chr-F: 0.577\ntestset: URL, BLEU: 0.9, chr-F: 0.180\ntestset: URL, BLEU: 9.8, chr-F: 0.524\ntestset: URL, BLEU: 6.3, chr-F: 0.288\ntestset: URL, BLEU: 5.3, chr-F: 0.273\ntestset: URL, BLEU: 0.2, chr-F: 0.007\ntestset: URL, BLEU: 3.0, chr-F: 0.230\ntestset: URL, BLEU: 0.2, chr-F: 0.053\ntestset: URL, BLEU: 20.2, chr-F: 0.513\ntestset: URL, BLEU: 6.4, chr-F: 0.301\ntestset: URL, BLEU: 44.7, chr-F: 0.624\ntestset: URL, BLEU: 0.8, chr-F: 0.098\ntestset: URL, BLEU: 2.9, chr-F: 0.143\ntestset: URL, BLEU: 0.6, chr-F: 0.124\ntestset: URL, BLEU: 22.7, chr-F: 0.500\ntestset: URL, BLEU: 31.6, chr-F: 0.570\ntestset: URL, BLEU: 0.5, chr-F: 0.085\ntestset: URL, BLEU: 0.1, chr-F: 0.078\ntestset: URL, BLEU: 0.9, chr-F: 0.137\ntestset: URL, BLEU: 2.7, chr-F: 0.255\ntestset: URL, BLEU: 0.4, chr-F: 0.084\ntestset: URL, BLEU: 1.9, chr-F: 0.050\ntestset: URL, BLEU: 1.3, chr-F: 0.102\ntestset: URL, BLEU: 1.4, chr-F: 0.169\ntestset: URL, BLEU: 7.8, chr-F: 0.329\ntestset: URL, BLEU: 27.0, chr-F: 0.530\ntestset: URL, BLEU: 0.1, chr-F: 0.009\ntestset: URL, BLEU: 9.8, chr-F: 0.434\ntestset: URL, BLEU: 22.2, chr-F: 0.465\ntestset: URL, BLEU: 4.8, chr-F: 0.155\ntestset: URL, BLEU: 0.2, chr-F: 0.007\ntestset: URL, BLEU: 1.7, chr-F: 0.143\ntestset: URL, BLEU: 1.5, chr-F: 0.083\ntestset: URL, BLEU: 30.3, chr-F: 0.514\ntestset: URL, BLEU: 1.6, chr-F: 0.104\ntestset: URL, BLEU: 0.7, chr-F: 0.049\ntestset: URL, BLEU: 0.6, chr-F: 0.064\ntestset: URL, BLEU: 5.4, chr-F: 0.317\ntestset: URL, BLEU: 0.3, chr-F: 0.074\ntestset: URL, BLEU: 
12.8, chr-F: 0.313\ntestset: URL, BLEU: 0.8, chr-F: 0.063\ntestset: URL, BLEU: 13.2, chr-F: 0.290\ntestset: URL, BLEU: 12.1, chr-F: 0.416\ntestset: URL, BLEU: 27.1, chr-F: 0.533\ntestset: URL, BLEU: 6.0, chr-F: 0.359\ntestset: URL, BLEU: 16.0, chr-F: 0.274\ntestset: URL, BLEU: 36.7, chr-F: 0.603\ntestset: URL, BLEU: 32.3, chr-F: 0.573\ntestset: URL, BLEU: 0.6, chr-F: 0.198\ntestset: URL, BLEU: 39.0, chr-F: 0.447\ntestset: URL, BLEU: 1.1, chr-F: 0.109\ntestset: URL, BLEU: 42.7, chr-F: 0.614\ntestset: URL, BLEU: 0.6, chr-F: 0.118\ntestset: URL, BLEU: 12.4, chr-F: 0.294\ntestset: URL, BLEU: 5.0, chr-F: 0.404\ntestset: URL, BLEU: 9.9, chr-F: 0.326\ntestset: URL, BLEU: 4.7, chr-F: 0.326\ntestset: URL, BLEU: 0.7, chr-F: 0.100\ntestset: URL, BLEU: 5.5, chr-F: 0.304\ntestset: URL, BLEU: 2.2, chr-F: 0.456\ntestset: URL, BLEU: 1.5, chr-F: 0.197\ntestset: URL, BLEU: 0.0, chr-F: 0.032\ntestset: URL, BLEU: 0.3, chr-F: 0.061\ntestset: URL, BLEU: 8.3, chr-F: 0.219\ntestset: URL, BLEU: 32.7, chr-F: 0.619\ntestset: URL, BLEU: 1.4, chr-F: 0.136\ntestset: URL, BLEU: 9.6, chr-F: 0.465\ntestset: URL, BLEU: 9.4, chr-F: 0.383\ntestset: URL, BLEU: 24.1, chr-F: 0.542\ntestset: URL, BLEU: 8.9, chr-F: 0.398\ntestset: URL, BLEU: 10.4, chr-F: 0.249\ntestset: URL, BLEU: 0.2, chr-F: 0.098\ntestset: URL, BLEU: 6.5, chr-F: 0.212\ntestset: URL, BLEU: 2.1, chr-F: 0.266\ntestset: URL, BLEU: 24.3, chr-F: 0.479\ntestset: URL, BLEU: 4.4, chr-F: 0.274\ntestset: URL, BLEU: 8.6, chr-F: 0.344\ntestset: URL, BLEU: 6.9, chr-F: 0.343\ntestset: URL, BLEU: 1.0, chr-F: 0.094\ntestset: URL, BLEU: 23.2, chr-F: 0.420\ntestset: URL, BLEU: 0.3, chr-F: 0.086\ntestset: URL, BLEU: 11.4, chr-F: 0.415\ntestset: URL, BLEU: 8.4, chr-F: 0.218\ntestset: URL, BLEU: 11.5, chr-F: 0.252\ntestset: URL, BLEU: 0.1, chr-F: 0.007\ntestset: URL, BLEU: 19.5, chr-F: 0.552\ntestset: URL, BLEU: 4.0, chr-F: 0.256\ntestset: URL, BLEU: 8.8, chr-F: 0.247\ntestset: URL, BLEU: 21.8, chr-F: 0.192\ntestset: URL, BLEU: 34.3, chr-F: 0.655\ntestset: 
URL, BLEU: 0.5, chr-F: 0.080### System Info:\n\n\n* hf\\_name: eng-mul\n* source\\_languages: eng\n* target\\_languages: mul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'sjn\\_Latn', 'cat', 'nan', 'spa', 'ile\\_Latn', 'pap', 'mwl', 'uzb\\_Latn', 'mww', 'hil', 'lij', 'avk\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'bos\\_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi\\_Latn', 'awa', 'swg', 'zsm\\_Latn', 'zho\\_Hant', 'gcf\\_Latn', 'uzb\\_Cyrl', 'isl', 'lfn\\_Latn', 'shs\\_Latn', 'nov\\_Latn', 'bho', 'ltz', 'lzh', 'kur\\_Latn', 'sun', 'arg', 'pes\\_Thaa', 'sqi', 'uig\\_Arab', 'csb\\_Latn', 'fra', 'hat', 'liv\\_Latn', 'non\\_Latn', 'sco', 'cmn\\_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul\\_Latn', 'amh', 'lfn\\_Cyrl', 'eus', 'fkv\\_Latn', 'tur', 'pus', 'afr', 'brx\\_Latn', 'nya', 'acm', 'ota\\_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho\\_Hans', 'tmw\\_Latn', 'kjh', 'ota\\_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh\\_Hans', 'ara', 'tly\\_Latn', 'lug', 'brx', 'bul', 'bel', 'vol\\_Latn', 'kat', 'gan', 'got\\_Goth', 'vro', 'ext', 'afh\\_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif\\_Latn', 'cjy\\_Hant', 'bre', 'ceb', 'mah', 
'nob\\_Hebr', 'crh\\_Latn', 'prg\\_Latn', 'khm', 'ang\\_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze\\_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie\\_Hani', 'arz', 'yue', 'kha', 'san\\_Deva', 'jbo\\_Latn', 'gos', 'hau\\_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig\\_Cyrl', 'fao', 'mnw', 'zho', 'orv\\_Cyrl', 'iba', 'bel\\_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc\\_Grek', 'tpw\\_Latn', 'oci', 'mfe', 'sna', 'kir\\_Cyrl', 'tat\\_Latn', 'gom', 'ido\\_Latn', 'sgs', 'pau', 'tgk\\_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp\\_Latn', 'wuu', 'dtp', 'jbo\\_Cyrl', 'tet', 'bod', 'yue\\_Hans', 'zlm\\_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz\\_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif\\_Latn', 'vie', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina\\_Latn', 'cjy\\_Hans', 'jdt\\_Cyrl', 'gsw', 'glv', 'khm\\_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd\\_Arab', 'arq', 'mri', 'kur\\_Arab', 'por', 'hin', 'shy\\_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue\\_Hant', 'kpv', 'tam', 'est', 'frm\\_Latn', 'hoc\\_Latn', 'bam\\_Latn', 'kek\\_Latn', 'ksh', 'tlh\\_Latn', 'ltg', 'pan\\_Guru', 'hnj\\_Latn', 'cor', 'gle', 'swe', 'lin', 'qya\\_Latn', 'kum', 'mad', 'cmn\\_Hant', 'fuv', 'nau', 'mon', 'akl\\_Latn', 'guj', 'kaz\\_Latn', 'wln', 'tuk\\_Latn', 'jav\\_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws\\_Latn', 'urd', 'stq', 'tat\\_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl\\_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld\\_Latn', 'tzl\\_Latn', 'mdf', 'ike\\_Latn', 'ces', 'ldn\\_Latn', 'egl', 'heb', 'vec', 'zul', 'max\\_Latn', 'pes\\_Latn', 'yid', 'mal', 'nds'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: 
URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: mul\n* short\\_pair: en-mul\n* chrF2\\_score: 0.451\n* bleu: 22.4\n* brevity\\_penalty: 0.987\n* ref\\_len: 68724.0\n* src\\_name: English\n* tgt\\_name: Multiple languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: mul\n* prefer\\_old: False\n* long\\_pair: eng-mul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-ng
* source languages: en
* target languages: ng
* OPUS readme: [en-ng](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ng/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ng/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ng | 24.8 | 0.496 |
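Each card reports scores in the same three-column markdown table. As an illustration only (the helper name and layout assumptions are mine, not part of the card), a table like the one above can be parsed into a dict for programmatic comparison:

```python
def parse_benchmarks(markdown_table):
    # Parse a card's "| testset | BLEU | chr-F |" table into {testset: {metric: value}}.
    rows = {}
    for line in markdown_table.strip().splitlines()[2:]:  # skip header and separator
        testset, bleu, chrf = (cell.strip() for cell in line.strip().strip("|").split("|"))
        rows[testset] = {"BLEU": float(bleu), "chr-F": float(chrf)}
    return rows

table = """| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ng | 24.8 | 0.496 |"""
print(parse_benchmarks(table))  # → {'JW300.en.ng': {'BLEU': 24.8, 'chr-F': 0.496}}
```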
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ng | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ng",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ng #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ng
* source languages: en
* target languages: ng
* OPUS readme: en-ng
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.8, chr-F: 0.496
| [
"### opus-mt-en-ng\n\n\n* source languages: en\n* target languages: ng\n* OPUS readme: en-ng\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.8, chr-F: 0.496"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ng #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ng\n\n\n* source languages: en\n* target languages: ng\n* OPUS readme: en-ng\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.8, chr-F: 0.496"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ng #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ng\n\n\n* source languages: en\n* target languages: ng\n* OPUS readme: en-ng\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.8, chr-F: 0.496"
] |
translation | transformers |
### eng-nic
* source group: English
* target group: Niger-Kordofanian languages
* OPUS readme: [eng-nic](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-nic/README.md)
* model: transformer
* source language(s): eng
* target language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-bam.eng.bam | 6.2 | 0.029 |
| Tatoeba-test.eng-ewe.eng.ewe | 4.5 | 0.258 |
| Tatoeba-test.eng-ful.eng.ful | 0.5 | 0.073 |
| Tatoeba-test.eng-ibo.eng.ibo | 3.9 | 0.267 |
| Tatoeba-test.eng-kin.eng.kin | 6.4 | 0.475 |
| Tatoeba-test.eng-lin.eng.lin | 1.2 | 0.308 |
| Tatoeba-test.eng-lug.eng.lug | 3.9 | 0.405 |
| Tatoeba-test.eng.multi | 11.1 | 0.427 |
| Tatoeba-test.eng-nya.eng.nya | 14.0 | 0.622 |
| Tatoeba-test.eng-run.eng.run | 13.6 | 0.477 |
| Tatoeba-test.eng-sag.eng.sag | 5.5 | 0.199 |
| Tatoeba-test.eng-sna.eng.sna | 19.6 | 0.557 |
| Tatoeba-test.eng-swa.eng.swa | 1.8 | 0.163 |
| Tatoeba-test.eng-toi.eng.toi | 8.3 | 0.231 |
| Tatoeba-test.eng-tso.eng.tso | 50.0 | 0.789 |
| Tatoeba-test.eng-umb.eng.umb | 7.8 | 0.342 |
| Tatoeba-test.eng-wol.eng.wol | 6.7 | 0.143 |
| Tatoeba-test.eng-xho.eng.xho | 26.4 | 0.620 |
| Tatoeba-test.eng-yor.eng.yor | 15.5 | 0.342 |
| Tatoeba-test.eng-zul.eng.zul | 35.9 | 0.750 |
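Because this checkpoint covers many target languages, the card requires a sentence-initial `>>id<<` token. A minimal sketch of that prefixing step (the helper name is mine; only the token format comes from the card):

```python
def add_target_token(sentences, lang_id):
    # Prepend the sentence-initial >>id<< token that multilingual OPUS-MT
    # checkpoints use to select the target language (e.g. "zul", "xho", "yor").
    return [f">>{lang_id}<< {sentence}" for sentence in sentences]

print(add_target_token(["Good morning."], "zul"))  # → ['>>zul<< Good morning.']
```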
### System Info:
- hf_name: eng-nic
- source_languages: eng
- target_languages: nic
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-nic/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic']
- src_constituents: {'eng'}
- tgt_constituents: {'bam_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-nic/opus-2020-07-27.test.txt
- src_alpha3: eng
- tgt_alpha3: nic
- short_pair: en-nic
- chrF2_score: 0.42700000000000005
- bleu: 11.1
- brevity_penalty: 1.0
- ref_len: 10625.0
- src_name: English
- tgt_name: Niger-Kordofanian languages
- train_date: 2020-07-27
- src_alpha2: en
- tgt_alpha2: nic
- prefer_old: False
- long_pair: eng-nic
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "nic"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-nic | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"nic",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"nic"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #nic #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-nic
* source group: English
* target group: Niger-Kordofanian languages
* OPUS readme: eng-nic
* model: transformer
* source language(s): eng
* target language(s): bam\_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\_Latn tso umb wol xho yor zul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 6.2, chr-F: 0.029
testset: URL, BLEU: 4.5, chr-F: 0.258
testset: URL, BLEU: 0.5, chr-F: 0.073
testset: URL, BLEU: 3.9, chr-F: 0.267
testset: URL, BLEU: 6.4, chr-F: 0.475
testset: URL, BLEU: 1.2, chr-F: 0.308
testset: URL, BLEU: 3.9, chr-F: 0.405
testset: URL, BLEU: 11.1, chr-F: 0.427
testset: URL, BLEU: 14.0, chr-F: 0.622
testset: URL, BLEU: 13.6, chr-F: 0.477
testset: URL, BLEU: 5.5, chr-F: 0.199
testset: URL, BLEU: 19.6, chr-F: 0.557
testset: URL, BLEU: 1.8, chr-F: 0.163
testset: URL, BLEU: 8.3, chr-F: 0.231
testset: URL, BLEU: 50.0, chr-F: 0.789
testset: URL, BLEU: 7.8, chr-F: 0.342
testset: URL, BLEU: 6.7, chr-F: 0.143
testset: URL, BLEU: 26.4, chr-F: 0.620
testset: URL, BLEU: 15.5, chr-F: 0.342
testset: URL, BLEU: 35.9, chr-F: 0.750
### System Info:
* hf\_name: eng-nic
* source\_languages: eng
* target\_languages: nic
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic']
* src\_constituents: {'eng'}
* tgt\_constituents: {'bam\_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\_Latn', 'umb'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: nic
* short\_pair: en-nic
* chrF2\_score: 0.42700000000000005
* bleu: 11.1
* brevity\_penalty: 1.0
* ref\_len: 10625.0
* src\_name: English
* tgt\_name: Niger-Kordofanian languages
* train\_date: 2020-07-27
* src\_alpha2: en
* tgt\_alpha2: nic
* prefer\_old: False
* long\_pair: eng-nic
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
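The System Info block above reports `bleu`, `brevity_penalty`, and `ref_len` together. As a reminder of how those relate, here is a sketch of standard BLEU's brevity penalty (illustrative only, not this pipeline's exact scoring code):

```python
import math

def brevity_penalty(candidate_len, reference_len):
    # BLEU's brevity penalty: 1.0 when the candidate corpus is at least as long
    # as the reference (as in the 1.0 reported above), exp(1 - r/c) when shorter.
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1 - reference_len / candidate_len)

print(brevity_penalty(10625, 10625))  # → 1.0
```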
| [
"### eng-nic\n\n\n* source group: English\n* target group: Niger-Kordofanian languages\n* OPUS readme: eng-nic\n* model: transformer\n* source language(s): eng\n* target language(s): bam\\_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.2, chr-F: 0.029\ntestset: URL, BLEU: 4.5, chr-F: 0.258\ntestset: URL, BLEU: 0.5, chr-F: 0.073\ntestset: URL, BLEU: 3.9, chr-F: 0.267\ntestset: URL, BLEU: 6.4, chr-F: 0.475\ntestset: URL, BLEU: 1.2, chr-F: 0.308\ntestset: URL, BLEU: 3.9, chr-F: 0.405\ntestset: URL, BLEU: 11.1, chr-F: 0.427\ntestset: URL, BLEU: 14.0, chr-F: 0.622\ntestset: URL, BLEU: 13.6, chr-F: 0.477\ntestset: URL, BLEU: 5.5, chr-F: 0.199\ntestset: URL, BLEU: 19.6, chr-F: 0.557\ntestset: URL, BLEU: 1.8, chr-F: 0.163\ntestset: URL, BLEU: 8.3, chr-F: 0.231\ntestset: URL, BLEU: 50.0, chr-F: 0.789\ntestset: URL, BLEU: 7.8, chr-F: 0.342\ntestset: URL, BLEU: 6.7, chr-F: 0.143\ntestset: URL, BLEU: 26.4, chr-F: 0.620\ntestset: URL, BLEU: 15.5, chr-F: 0.342\ntestset: URL, BLEU: 35.9, chr-F: 0.750",
"### System Info:\n\n\n* hf\\_name: eng-nic\n* source\\_languages: eng\n* target\\_languages: nic\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bam\\_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: nic\n* short\\_pair: en-nic\n* chrF2\\_score: 0.42700000000000005\n* bleu: 11.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 10625.0\n* src\\_name: English\n* tgt\\_name: Niger-Kordofanian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: en\n* tgt\\_alpha2: nic\n* prefer\\_old: False\n* long\\_pair: eng-nic\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #nic #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-nic\n\n\n* source group: English\n* target group: Niger-Kordofanian languages\n* OPUS readme: eng-nic\n* model: transformer\n* source language(s): eng\n* target language(s): bam\\_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.2, chr-F: 0.029\ntestset: URL, BLEU: 4.5, chr-F: 0.258\ntestset: URL, BLEU: 0.5, chr-F: 0.073\ntestset: URL, BLEU: 3.9, chr-F: 0.267\ntestset: URL, BLEU: 6.4, chr-F: 0.475\ntestset: URL, BLEU: 1.2, chr-F: 0.308\ntestset: URL, BLEU: 3.9, chr-F: 0.405\ntestset: URL, BLEU: 11.1, chr-F: 0.427\ntestset: URL, BLEU: 14.0, chr-F: 0.622\ntestset: URL, BLEU: 13.6, chr-F: 0.477\ntestset: URL, BLEU: 5.5, chr-F: 0.199\ntestset: URL, BLEU: 19.6, chr-F: 0.557\ntestset: URL, BLEU: 1.8, chr-F: 0.163\ntestset: URL, BLEU: 8.3, chr-F: 0.231\ntestset: URL, BLEU: 50.0, chr-F: 0.789\ntestset: URL, BLEU: 7.8, chr-F: 0.342\ntestset: URL, BLEU: 6.7, chr-F: 0.143\ntestset: URL, BLEU: 26.4, chr-F: 0.620\ntestset: URL, BLEU: 15.5, chr-F: 0.342\ntestset: URL, BLEU: 35.9, chr-F: 0.750",
"### System Info:\n\n\n* hf\\_name: eng-nic\n* source\\_languages: eng\n* target\\_languages: nic\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bam\\_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: nic\n* short\\_pair: en-nic\n* chrF2\\_score: 0.42700000000000005\n* bleu: 11.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 10625.0\n* src\\_name: English\n* tgt\\_name: Niger-Kordofanian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: en\n* tgt\\_alpha2: nic\n* prefer\\_old: False\n* long\\_pair: eng-nic\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
85,
631,
567
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #nic #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-nic\n\n\n* source group: English\n* target group: Niger-Kordofanian languages\n* OPUS readme: eng-nic\n* model: transformer\n* source language(s): eng\n* target language(s): bam\\_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 6.2, chr-F: 0.029\ntestset: URL, BLEU: 4.5, chr-F: 0.258\ntestset: URL, BLEU: 0.5, chr-F: 0.073\ntestset: URL, BLEU: 3.9, chr-F: 0.267\ntestset: URL, BLEU: 6.4, chr-F: 0.475\ntestset: URL, BLEU: 1.2, chr-F: 0.308\ntestset: URL, BLEU: 3.9, chr-F: 0.405\ntestset: URL, BLEU: 11.1, chr-F: 0.427\ntestset: URL, BLEU: 14.0, chr-F: 0.622\ntestset: URL, BLEU: 13.6, chr-F: 0.477\ntestset: URL, BLEU: 5.5, chr-F: 0.199\ntestset: URL, BLEU: 19.6, chr-F: 0.557\ntestset: URL, BLEU: 1.8, chr-F: 0.163\ntestset: URL, BLEU: 8.3, chr-F: 0.231\ntestset: URL, BLEU: 50.0, chr-F: 0.789\ntestset: URL, BLEU: 7.8, chr-F: 0.342\ntestset: URL, BLEU: 6.7, chr-F: 0.143\ntestset: URL, BLEU: 26.4, chr-F: 0.620\ntestset: URL, BLEU: 15.5, chr-F: 0.342\ntestset: URL, BLEU: 35.9, chr-F: 0.750### System Info:\n\n\n* hf\\_name: eng-nic\n* source\\_languages: eng\n* target\\_languages: nic\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bam\\_Latn', 'sna', 'kin', 'wol', 
'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: nic\n* short\\_pair: en-nic\n* chrF2\\_score: 0.42700000000000005\n* bleu: 11.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 10625.0\n* src\\_name: English\n* tgt\\_name: Niger-Kordofanian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: en\n* tgt\\_alpha2: nic\n* prefer\\_old: False\n* long\\_pair: eng-nic\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-niu
* source languages: en
* target languages: niu
* OPUS readme: [en-niu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-niu/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-niu/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.niu | 53.0 | 0.698 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-niu | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"niu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #niu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-niu
* source languages: en
* target languages: niu
* OPUS readme: en-niu
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 53.0, chr-F: 0.698
| [
"### opus-mt-en-niu\n\n\n* source languages: en\n* target languages: niu\n* OPUS readme: en-niu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.0, chr-F: 0.698"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #niu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-niu\n\n\n* source languages: en\n* target languages: niu\n* OPUS readme: en-niu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.0, chr-F: 0.698"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #niu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-niu\n\n\n* source languages: en\n* target languages: niu\n* OPUS readme: en-niu\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.0, chr-F: 0.698"
] |
translation | transformers |
### opus-mt-en-nl
* source languages: en
* target languages: nl
* OPUS readme: [en-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-04.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nl/opus-2019-12-04.zip)
* test set translations: [opus-2019-12-04.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nl/opus-2019-12-04.test.txt)
* test set scores: [opus-2019-12-04.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nl/opus-2019-12-04.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.nl | 57.1 | 0.730 |
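The chr-F column is a character n-gram F-score. A simplified sketch of the idea (averaging character n-gram precision and recall for n = 1..6, combined with beta = 2; the real metric has more detail, so treat this as illustrative only):

```python
from collections import Counter

def char_ngrams(text, n):
    # Collect character n-grams, ignoring spaces.
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    # Average character n-gram precision and recall over n = 1..max_n,
    # then combine them into an F-score that weights recall (beta = 2).
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p, r = sum(precisions) / max_n, sum(recalls) / max_n
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(chrf("hallo wereld", "hallo wereld"))  # → 1.0
```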
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-nl | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-nl
* source languages: en
* target languages: nl
* OPUS readme: en-nl
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 57.1, chr-F: 0.730
| [
"### opus-mt-en-nl\n\n\n* source languages: en\n* target languages: nl\n* OPUS readme: en-nl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.1, chr-F: 0.730"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-nl\n\n\n* source languages: en\n* target languages: nl\n* OPUS readme: en-nl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.1, chr-F: 0.730"
] | [
53,
105
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-nl\n\n\n* source languages: en\n* target languages: nl\n* OPUS readme: en-nl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.1, chr-F: 0.730"
] |
translation | transformers |
### opus-mt-en-nso
* source languages: en
* target languages: nso
* OPUS readme: [en-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nso/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nso/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.nso | 52.2 | 0.684 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-nso | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"nso",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #nso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-nso
* source languages: en
* target languages: nso
* OPUS readme: en-nso
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 52.2, chr-F: 0.684
| [
"### opus-mt-en-nso\n\n\n* source languages: en\n* target languages: nso\n* OPUS readme: en-nso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.2, chr-F: 0.684"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #nso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-nso\n\n\n* source languages: en\n* target languages: nso\n* OPUS readme: en-nso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.2, chr-F: 0.684"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #nso #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-nso\n\n\n* source languages: en\n* target languages: nso\n* OPUS readme: en-nso\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.2, chr-F: 0.684"
] |
translation | transformers |
### opus-mt-en-ny
* source languages: en
* target languages: ny
* OPUS readme: [en-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ny/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ny | 31.4 | 0.570 |
| Tatoeba.en.ny | 26.8 | 0.645 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ny | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ny",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ny #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ny
* source languages: en
* target languages: ny
* OPUS readme: en-ny
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.4, chr-F: 0.570
testset: URL, BLEU: 26.8, chr-F: 0.645
| [
"### opus-mt-en-ny\n\n\n* source languages: en\n* target languages: ny\n* OPUS readme: en-ny\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.570\ntestset: URL, BLEU: 26.8, chr-F: 0.645"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ny #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ny\n\n\n* source languages: en\n* target languages: ny\n* OPUS readme: en-ny\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.570\ntestset: URL, BLEU: 26.8, chr-F: 0.645"
] | [
51,
128
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ny #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ny\n\n\n* source languages: en\n* target languages: ny\n* OPUS readme: en-ny\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.570\ntestset: URL, BLEU: 26.8, chr-F: 0.645"
] |
translation | transformers |
### opus-mt-en-nyk
* source languages: en
* target languages: nyk
* OPUS readme: [en-nyk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-nyk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-nyk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.nyk | 26.6 | 0.511 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-nyk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"nyk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #nyk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-nyk
* source languages: en
* target languages: nyk
* OPUS readme: en-nyk
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 26.6, chr-F: 0.511
| [
"### opus-mt-en-nyk\n\n\n* source languages: en\n* target languages: nyk\n* OPUS readme: en-nyk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.511"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #nyk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-nyk\n\n\n* source languages: en\n* target languages: nyk\n* OPUS readme: en-nyk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.511"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #nyk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-nyk\n\n\n* source languages: en\n* target languages: nyk\n* OPUS readme: en-nyk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.511"
] |
translation | transformers |
### opus-mt-en-om
* source languages: en
* target languages: om
* OPUS readme: [en-om](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-om/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-om/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.om | 21.8 | 0.498 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-om | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"om",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #om #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-om
* source languages: en
* target languages: om
* OPUS readme: en-om
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.8, chr-F: 0.498
| [
"### opus-mt-en-om\n\n\n* source languages: en\n* target languages: om\n* OPUS readme: en-om\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.498"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #om #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-om\n\n\n* source languages: en\n* target languages: om\n* OPUS readme: en-om\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.498"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #om #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-om\n\n\n* source languages: en\n* target languages: om\n* OPUS readme: en-om\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.498"
] |
translation | transformers |
### opus-mt-en-pag
* source languages: en
* target languages: pag
* OPUS readme: [en-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pag/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pag/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pag | 37.9 | 0.598 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-pag | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pag",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #pag #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-pag
* source languages: en
* target languages: pag
* OPUS readme: en-pag
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.9, chr-F: 0.598
| [
"### opus-mt-en-pag\n\n\n* source languages: en\n* target languages: pag\n* OPUS readme: en-pag\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.598"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pag #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-pag\n\n\n* source languages: en\n* target languages: pag\n* OPUS readme: en-pag\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.598"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pag #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-pag\n\n\n* source languages: en\n* target languages: pag\n* OPUS readme: en-pag\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.598"
] |
translation | transformers |
### opus-mt-en-pap
* source languages: en
* target languages: pap
* OPUS readme: [en-pap](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pap/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pap/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pap/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pap/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pap | 40.1 | 0.586 |
| Tatoeba.en.pap | 52.8 | 0.665 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-pap | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pap",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #pap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-pap
* source languages: en
* target languages: pap
* OPUS readme: en-pap
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.1, chr-F: 0.586
testset: URL, BLEU: 52.8, chr-F: 0.665
| [
"### opus-mt-en-pap\n\n\n* source languages: en\n* target languages: pap\n* OPUS readme: en-pap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.586\ntestset: URL, BLEU: 52.8, chr-F: 0.665"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-pap\n\n\n* source languages: en\n* target languages: pap\n* OPUS readme: en-pap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.586\ntestset: URL, BLEU: 52.8, chr-F: 0.665"
] | [
52,
132
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pap #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-pap\n\n\n* source languages: en\n* target languages: pap\n* OPUS readme: en-pap\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.586\ntestset: URL, BLEU: 52.8, chr-F: 0.665"
] |
translation | transformers |
### eng-phi
* source group: English
* target group: Philippine languages
* OPUS readme: [eng-phi](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-phi/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb hil ilo pag war
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 7.1 | 0.245 |
| Tatoeba-test.eng-ceb.eng.ceb | 10.5 | 0.435 |
| Tatoeba-test.eng-hil.eng.hil | 18.0 | 0.506 |
| Tatoeba-test.eng-ilo.eng.ilo | 33.4 | 0.590 |
| Tatoeba-test.eng.multi | 13.1 | 0.392 |
| Tatoeba-test.eng-pag.eng.pag | 19.4 | 0.481 |
| Tatoeba-test.eng-war.eng.war | 12.8 | 0.441 |
### System Info:
- hf_name: eng-phi
- source_languages: eng
- target_languages: phi
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-phi/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'phi']
- src_constituents: {'eng'}
- tgt_constituents: {'ilo', 'akl_Latn', 'war', 'hil', 'pag', 'ceb'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-phi/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: phi
- short_pair: en-phi
- chrF2_score: 0.392
- bleu: 13.1
- brevity_penalty: 1.0
- ref_len: 30022.0
- src_name: English
- tgt_name: Philippine languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: phi
- prefer_old: False
- long_pair: eng-phi
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "phi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-phi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"phi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"phi"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #phi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-phi
* source group: English
* target group: Philippine languages
* OPUS readme: eng-phi
* model: transformer
* source language(s): eng
* target language(s): akl\_Latn ceb hil ilo pag war
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 7.1, chr-F: 0.245
testset: URL, BLEU: 10.5, chr-F: 0.435
testset: URL, BLEU: 18.0, chr-F: 0.506
testset: URL, BLEU: 33.4, chr-F: 0.590
testset: URL, BLEU: 13.1, chr-F: 0.392
testset: URL, BLEU: 19.4, chr-F: 0.481
testset: URL, BLEU: 12.8, chr-F: 0.441
### System Info:
* hf\_name: eng-phi
* source\_languages: eng
* target\_languages: phi
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'phi']
* src\_constituents: {'eng'}
* tgt\_constituents: {'ilo', 'akl\_Latn', 'war', 'hil', 'pag', 'ceb'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: phi
* short\_pair: en-phi
* chrF2\_score: 0.392
* bleu: 13.1
* brevity\_penalty: 1.0
* ref\_len: 30022.0
* src\_name: English
* tgt\_name: Philippine languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: phi
* prefer\_old: False
* long\_pair: eng-phi
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-phi\n\n\n* source group: English\n* target group: Philippine languages\n* OPUS readme: eng-phi\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb hil ilo pag war\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 7.1, chr-F: 0.245\ntestset: URL, BLEU: 10.5, chr-F: 0.435\ntestset: URL, BLEU: 18.0, chr-F: 0.506\ntestset: URL, BLEU: 33.4, chr-F: 0.590\ntestset: URL, BLEU: 13.1, chr-F: 0.392\ntestset: URL, BLEU: 19.4, chr-F: 0.481\ntestset: URL, BLEU: 12.8, chr-F: 0.441",
"### System Info:\n\n\n* hf\\_name: eng-phi\n* source\\_languages: eng\n* target\\_languages: phi\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'phi']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ilo', 'akl\\_Latn', 'war', 'hil', 'pag', 'ceb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: phi\n* short\\_pair: en-phi\n* chrF2\\_score: 0.392\n* bleu: 13.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 30022.0\n* src\\_name: English\n* tgt\\_name: Philippine languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: phi\n* prefer\\_old: False\n* long\\_pair: eng-phi\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #phi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-phi\n\n\n* source group: English\n* target group: Philippine languages\n* OPUS readme: eng-phi\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb hil ilo pag war\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 7.1, chr-F: 0.245\ntestset: URL, BLEU: 10.5, chr-F: 0.435\ntestset: URL, BLEU: 18.0, chr-F: 0.506\ntestset: URL, BLEU: 33.4, chr-F: 0.590\ntestset: URL, BLEU: 13.1, chr-F: 0.392\ntestset: URL, BLEU: 19.4, chr-F: 0.481\ntestset: URL, BLEU: 12.8, chr-F: 0.441",
"### System Info:\n\n\n* hf\\_name: eng-phi\n* source\\_languages: eng\n* target\\_languages: phi\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'phi']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ilo', 'akl\\_Latn', 'war', 'hil', 'pag', 'ceb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: phi\n* short\\_pair: en-phi\n* chrF2\\_score: 0.392\n* bleu: 13.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 30022.0\n* src\\_name: English\n* tgt\\_name: Philippine languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: phi\n* prefer\\_old: False\n* long\\_pair: eng-phi\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
304,
421
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #phi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-phi\n\n\n* source group: English\n* target group: Philippine languages\n* OPUS readme: eng-phi\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb hil ilo pag war\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 7.1, chr-F: 0.245\ntestset: URL, BLEU: 10.5, chr-F: 0.435\ntestset: URL, BLEU: 18.0, chr-F: 0.506\ntestset: URL, BLEU: 33.4, chr-F: 0.590\ntestset: URL, BLEU: 13.1, chr-F: 0.392\ntestset: URL, BLEU: 19.4, chr-F: 0.481\ntestset: URL, BLEU: 12.8, chr-F: 0.441### System Info:\n\n\n* hf\\_name: eng-phi\n* source\\_languages: eng\n* target\\_languages: phi\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'phi']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ilo', 'akl\\_Latn', 'war', 'hil', 'pag', 'ceb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: phi\n* short\\_pair: en-phi\n* chrF2\\_score: 0.392\n* bleu: 13.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 30022.0\n* src\\_name: English\n* tgt\\_name: Philippine languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: phi\n* prefer\\_old: False\n* long\\_pair: eng-phi\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-pis
* source languages: en
* target languages: pis
* OPUS readme: [en-pis](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pis/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pis/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pis/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pis/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pis | 38.3 | 0.571 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-pis | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pis",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #pis #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-pis
* source languages: en
* target languages: pis
* OPUS readme: en-pis
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.3, chr-F: 0.571
| [
"### opus-mt-en-pis\n\n\n* source languages: en\n* target languages: pis\n* OPUS readme: en-pis\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.3, chr-F: 0.571"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pis #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-pis\n\n\n* source languages: en\n* target languages: pis\n* OPUS readme: en-pis\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.3, chr-F: 0.571"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pis #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-pis\n\n\n* source languages: en\n* target languages: pis\n* OPUS readme: en-pis\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.3, chr-F: 0.571"
] |
translation | transformers |
### opus-mt-en-pon
* source languages: en
* target languages: pon
* OPUS readme: [en-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-pon/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.pon | 32.4 | 0.542 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-pon | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pon",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #pon #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-pon
* source languages: en
* target languages: pon
* OPUS readme: en-pon
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.4, chr-F: 0.542
| [
"### opus-mt-en-pon\n\n\n* source languages: en\n* target languages: pon\n* OPUS readme: en-pon\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.4, chr-F: 0.542"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pon #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-pon\n\n\n* source languages: en\n* target languages: pon\n* OPUS readme: en-pon\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.4, chr-F: 0.542"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pon #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-pon\n\n\n* source languages: en\n* target languages: pon\n* OPUS readme: en-pon\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.4, chr-F: 0.542"
] |
translation | transformers |
### eng-poz
* source group: English
* target group: Malayo-Polynesian languages
* OPUS readme: [eng-poz](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-poz/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav_Java lkt mad mah max_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw_Latn ton tvl war zlm_Latn zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.eval.txt)
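Since this is a multilingual model, every input sentence must start with the `>>id<<` target-language token described above. A minimal sketch of how that prefix could be attached (the helper name and the subset of valid IDs below are illustrative assumptions, not part of this card; the full list is the target language(s) line above):

```python
# Hypothetical helper for the sentence-initial ">>id<<" token this
# multilingual model expects. VALID_TARGETS is only a subset for
# illustration — see the "target language(s)" list above for all IDs.
VALID_TARGETS = {"ceb", "fij", "gil", "haw", "ilo", "mri", "smo", "ton"}

def with_target_token(sentence: str, target_id: str) -> str:
    """Prepend the target-language token, e.g. '>>mri<< Hello'."""
    if target_id not in VALID_TARGETS:
        raise ValueError(f"unknown target language id: {target_id}")
    return f">>{target_id}<< {sentence}"

# Assumed usage with transformers (requires downloading the checkpoint):
#   from transformers import MarianMTModel, MarianTokenizer
#   tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-poz")
#   model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-poz")
#   batch = tok([with_target_token("How are you?", "mri")], return_tensors="pt")
#   print(tok.decode(model.generate(**batch)[0], skip_special_tokens=True))
```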
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 1.3 | 0.086 |
| Tatoeba-test.eng-ceb.eng.ceb | 10.2 | 0.426 |
| Tatoeba-test.eng-cha.eng.cha | 1.9 | 0.196 |
| Tatoeba-test.eng-dtp.eng.dtp | 0.4 | 0.121 |
| Tatoeba-test.eng-fij.eng.fij | 31.0 | 0.463 |
| Tatoeba-test.eng-gil.eng.gil | 45.4 | 0.635 |
| Tatoeba-test.eng-haw.eng.haw | 0.6 | 0.104 |
| Tatoeba-test.eng-hil.eng.hil | 14.4 | 0.498 |
| Tatoeba-test.eng-iba.eng.iba | 17.4 | 0.414 |
| Tatoeba-test.eng-ilo.eng.ilo | 33.1 | 0.585 |
| Tatoeba-test.eng-jav.eng.jav | 6.5 | 0.309 |
| Tatoeba-test.eng-lkt.eng.lkt | 0.5 | 0.065 |
| Tatoeba-test.eng-mad.eng.mad | 1.7 | 0.156 |
| Tatoeba-test.eng-mah.eng.mah | 12.7 | 0.391 |
| Tatoeba-test.eng-mlg.eng.mlg | 30.3 | 0.504 |
| Tatoeba-test.eng-mri.eng.mri | 8.2 | 0.316 |
| Tatoeba-test.eng-msa.eng.msa | 30.4 | 0.561 |
| Tatoeba-test.eng.multi | 16.2 | 0.410 |
| Tatoeba-test.eng-nau.eng.nau | 0.6 | 0.087 |
| Tatoeba-test.eng-niu.eng.niu | 33.2 | 0.482 |
| Tatoeba-test.eng-pag.eng.pag | 19.4 | 0.555 |
| Tatoeba-test.eng-pau.eng.pau | 1.0 | 0.124 |
| Tatoeba-test.eng-rap.eng.rap | 1.4 | 0.090 |
| Tatoeba-test.eng-smo.eng.smo | 12.9 | 0.407 |
| Tatoeba-test.eng-sun.eng.sun | 15.5 | 0.364 |
| Tatoeba-test.eng-tah.eng.tah | 9.5 | 0.295 |
| Tatoeba-test.eng-tet.eng.tet | 1.2 | 0.146 |
| Tatoeba-test.eng-ton.eng.ton | 23.7 | 0.484 |
| Tatoeba-test.eng-tvl.eng.tvl | 32.5 | 0.549 |
| Tatoeba-test.eng-war.eng.war | 12.6 | 0.432 |
### System Info:
- hf_name: eng-poz
- source_languages: eng
- target_languages: poz
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-poz/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'poz']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-poz/opus-2020-07-27.test.txt
- src_alpha3: eng
- tgt_alpha3: poz
- short_pair: en-poz
- chrF2_score: 0.41
- bleu: 16.2
- brevity_penalty: 1.0
- ref_len: 66803.0
- src_name: English
- tgt_name: Malayo-Polynesian languages
- train_date: 2020-07-27
- src_alpha2: en
- tgt_alpha2: poz
- prefer_old: False
- long_pair: eng-poz
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "poz"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-poz | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"poz",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"poz"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #poz #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-poz
* source group: English
* target group: Malayo-Polynesian languages
* OPUS readme: eng-poz
* model: transformer
* source language(s): eng
* target language(s): akl\_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav\_Java lkt mad mah max\_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw\_Latn ton tvl war zlm\_Latn zsm\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 1.3, chr-F: 0.086
testset: URL, BLEU: 10.2, chr-F: 0.426
testset: URL, BLEU: 1.9, chr-F: 0.196
testset: URL, BLEU: 0.4, chr-F: 0.121
testset: URL, BLEU: 31.0, chr-F: 0.463
testset: URL, BLEU: 45.4, chr-F: 0.635
testset: URL, BLEU: 0.6, chr-F: 0.104
testset: URL, BLEU: 14.4, chr-F: 0.498
testset: URL, BLEU: 17.4, chr-F: 0.414
testset: URL, BLEU: 33.1, chr-F: 0.585
testset: URL, BLEU: 6.5, chr-F: 0.309
testset: URL, BLEU: 0.5, chr-F: 0.065
testset: URL, BLEU: 1.7, chr-F: 0.156
testset: URL, BLEU: 12.7, chr-F: 0.391
testset: URL, BLEU: 30.3, chr-F: 0.504
testset: URL, BLEU: 8.2, chr-F: 0.316
testset: URL, BLEU: 30.4, chr-F: 0.561
testset: URL, BLEU: 16.2, chr-F: 0.410
testset: URL, BLEU: 0.6, chr-F: 0.087
testset: URL, BLEU: 33.2, chr-F: 0.482
testset: URL, BLEU: 19.4, chr-F: 0.555
testset: URL, BLEU: 1.0, chr-F: 0.124
testset: URL, BLEU: 1.4, chr-F: 0.090
testset: URL, BLEU: 12.9, chr-F: 0.407
testset: URL, BLEU: 15.5, chr-F: 0.364
testset: URL, BLEU: 9.5, chr-F: 0.295
testset: URL, BLEU: 1.2, chr-F: 0.146
testset: URL, BLEU: 23.7, chr-F: 0.484
testset: URL, BLEU: 32.5, chr-F: 0.549
testset: URL, BLEU: 12.6, chr-F: 0.432
### System Info:
* hf\_name: eng-poz
* source\_languages: eng
* target\_languages: poz
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'poz']
* src\_constituents: {'eng'}
* tgt\_constituents: set()
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: poz
* short\_pair: en-poz
* chrF2\_score: 0.41
* bleu: 16.2
* brevity\_penalty: 1.0
* ref\_len: 66803.0
* src\_name: English
* tgt\_name: Malayo-Polynesian languages
* train\_date: 2020-07-27
* src\_alpha2: en
* tgt\_alpha2: poz
* prefer\_old: False
* long\_pair: eng-poz
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-poz\n\n\n* source group: English\n* target group: Malayo-Polynesian languages\n* OPUS readme: eng-poz\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav\\_Java lkt mad mah max\\_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw\\_Latn ton tvl war zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 1.3, chr-F: 0.086\ntestset: URL, BLEU: 10.2, chr-F: 0.426\ntestset: URL, BLEU: 1.9, chr-F: 0.196\ntestset: URL, BLEU: 0.4, chr-F: 0.121\ntestset: URL, BLEU: 31.0, chr-F: 0.463\ntestset: URL, BLEU: 45.4, chr-F: 0.635\ntestset: URL, BLEU: 0.6, chr-F: 0.104\ntestset: URL, BLEU: 14.4, chr-F: 0.498\ntestset: URL, BLEU: 17.4, chr-F: 0.414\ntestset: URL, BLEU: 33.1, chr-F: 0.585\ntestset: URL, BLEU: 6.5, chr-F: 0.309\ntestset: URL, BLEU: 0.5, chr-F: 0.065\ntestset: URL, BLEU: 1.7, chr-F: 0.156\ntestset: URL, BLEU: 12.7, chr-F: 0.391\ntestset: URL, BLEU: 30.3, chr-F: 0.504\ntestset: URL, BLEU: 8.2, chr-F: 0.316\ntestset: URL, BLEU: 30.4, chr-F: 0.561\ntestset: URL, BLEU: 16.2, chr-F: 0.410\ntestset: URL, BLEU: 0.6, chr-F: 0.087\ntestset: URL, BLEU: 33.2, chr-F: 0.482\ntestset: URL, BLEU: 19.4, chr-F: 0.555\ntestset: URL, BLEU: 1.0, chr-F: 0.124\ntestset: URL, BLEU: 1.4, chr-F: 0.090\ntestset: URL, BLEU: 12.9, chr-F: 0.407\ntestset: URL, BLEU: 15.5, chr-F: 0.364\ntestset: URL, BLEU: 9.5, chr-F: 0.295\ntestset: URL, BLEU: 1.2, chr-F: 0.146\ntestset: URL, BLEU: 23.7, chr-F: 0.484\ntestset: URL, BLEU: 32.5, chr-F: 0.549\ntestset: URL, BLEU: 12.6, chr-F: 0.432",
"### System Info:\n\n\n* hf\\_name: eng-poz\n* source\\_languages: eng\n* target\\_languages: poz\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'poz']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: poz\n* short\\_pair: en-poz\n* chrF2\\_score: 0.41\n* bleu: 16.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 66803.0\n* src\\_name: English\n* tgt\\_name: Malayo-Polynesian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: en\n* tgt\\_alpha2: poz\n* prefer\\_old: False\n* long\\_pair: eng-poz\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #poz #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-poz\n\n\n* source group: English\n* target group: Malayo-Polynesian languages\n* OPUS readme: eng-poz\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav\\_Java lkt mad mah max\\_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw\\_Latn ton tvl war zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 1.3, chr-F: 0.086\ntestset: URL, BLEU: 10.2, chr-F: 0.426\ntestset: URL, BLEU: 1.9, chr-F: 0.196\ntestset: URL, BLEU: 0.4, chr-F: 0.121\ntestset: URL, BLEU: 31.0, chr-F: 0.463\ntestset: URL, BLEU: 45.4, chr-F: 0.635\ntestset: URL, BLEU: 0.6, chr-F: 0.104\ntestset: URL, BLEU: 14.4, chr-F: 0.498\ntestset: URL, BLEU: 17.4, chr-F: 0.414\ntestset: URL, BLEU: 33.1, chr-F: 0.585\ntestset: URL, BLEU: 6.5, chr-F: 0.309\ntestset: URL, BLEU: 0.5, chr-F: 0.065\ntestset: URL, BLEU: 1.7, chr-F: 0.156\ntestset: URL, BLEU: 12.7, chr-F: 0.391\ntestset: URL, BLEU: 30.3, chr-F: 0.504\ntestset: URL, BLEU: 8.2, chr-F: 0.316\ntestset: URL, BLEU: 30.4, chr-F: 0.561\ntestset: URL, BLEU: 16.2, chr-F: 0.410\ntestset: URL, BLEU: 0.6, chr-F: 0.087\ntestset: URL, BLEU: 33.2, chr-F: 0.482\ntestset: URL, BLEU: 19.4, chr-F: 0.555\ntestset: URL, BLEU: 1.0, chr-F: 0.124\ntestset: URL, BLEU: 1.4, chr-F: 0.090\ntestset: URL, BLEU: 12.9, chr-F: 0.407\ntestset: URL, BLEU: 15.5, chr-F: 0.364\ntestset: URL, BLEU: 9.5, chr-F: 0.295\ntestset: URL, BLEU: 1.2, chr-F: 0.146\ntestset: URL, BLEU: 23.7, chr-F: 0.484\ntestset: URL, BLEU: 32.5, chr-F: 0.549\ntestset: URL, BLEU: 12.6, chr-F: 0.432",
"### System Info:\n\n\n* hf\\_name: eng-poz\n* source\\_languages: eng\n* target\\_languages: poz\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'poz']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: poz\n* short\\_pair: en-poz\n* chrF2\\_score: 0.41\n* bleu: 16.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 66803.0\n* src\\_name: English\n* tgt\\_name: Malayo-Polynesian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: en\n* tgt\\_alpha2: poz\n* prefer\\_old: False\n* long\\_pair: eng-poz\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
902,
400
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #poz #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-poz\n\n\n* source group: English\n* target group: Malayo-Polynesian languages\n* OPUS readme: eng-poz\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb cha dtp fij gil haw hil iba ilo ind jav jav\\_Java lkt mad mah max\\_Latn min mlg mri nau niu pag pau rap smo sun tah tet tmw\\_Latn ton tvl war zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 1.3, chr-F: 0.086\ntestset: URL, BLEU: 10.2, chr-F: 0.426\ntestset: URL, BLEU: 1.9, chr-F: 0.196\ntestset: URL, BLEU: 0.4, chr-F: 0.121\ntestset: URL, BLEU: 31.0, chr-F: 0.463\ntestset: URL, BLEU: 45.4, chr-F: 0.635\ntestset: URL, BLEU: 0.6, chr-F: 0.104\ntestset: URL, BLEU: 14.4, chr-F: 0.498\ntestset: URL, BLEU: 17.4, chr-F: 0.414\ntestset: URL, BLEU: 33.1, chr-F: 0.585\ntestset: URL, BLEU: 6.5, chr-F: 0.309\ntestset: URL, BLEU: 0.5, chr-F: 0.065\ntestset: URL, BLEU: 1.7, chr-F: 0.156\ntestset: URL, BLEU: 12.7, chr-F: 0.391\ntestset: URL, BLEU: 30.3, chr-F: 0.504\ntestset: URL, BLEU: 8.2, chr-F: 0.316\ntestset: URL, BLEU: 30.4, chr-F: 0.561\ntestset: URL, BLEU: 16.2, chr-F: 0.410\ntestset: URL, BLEU: 0.6, chr-F: 0.087\ntestset: URL, BLEU: 33.2, chr-F: 0.482\ntestset: URL, BLEU: 19.4, chr-F: 0.555\ntestset: URL, BLEU: 1.0, chr-F: 0.124\ntestset: URL, BLEU: 1.4, chr-F: 0.090\ntestset: URL, BLEU: 12.9, chr-F: 0.407\ntestset: URL, BLEU: 15.5, chr-F: 0.364\ntestset: URL, BLEU: 9.5, chr-F: 0.295\ntestset: URL, BLEU: 1.2, chr-F: 0.146\ntestset: URL, BLEU: 23.7, chr-F: 0.484\ntestset: URL, BLEU: 32.5, chr-F: 
0.549\ntestset: URL, BLEU: 12.6, chr-F: 0.432### System Info:\n\n\n* hf\\_name: eng-poz\n* source\\_languages: eng\n* target\\_languages: poz\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'poz']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: poz\n* short\\_pair: en-poz\n* chrF2\\_score: 0.41\n* bleu: 16.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 66803.0\n* src\\_name: English\n* tgt\\_name: Malayo-Polynesian languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: en\n* tgt\\_alpha2: poz\n* prefer\\_old: False\n* long\\_pair: eng-poz\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-pqe
* source group: English
* target group: Eastern Malayo-Polynesian languages
* OPUS readme: [eng-pqe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-pqe/README.md)
* model: transformer
* source language(s): eng
* target language(s): fij gil haw lkt mah mri nau niu rap smo tah ton tvl
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqe/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqe/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqe/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-fij.eng.fij | 22.1 | 0.396 |
| Tatoeba-test.eng-gil.eng.gil | 41.9 | 0.673 |
| Tatoeba-test.eng-haw.eng.haw | 0.6 | 0.114 |
| Tatoeba-test.eng-lkt.eng.lkt | 0.5 | 0.075 |
| Tatoeba-test.eng-mah.eng.mah | 9.7 | 0.386 |
| Tatoeba-test.eng-mri.eng.mri | 7.7 | 0.301 |
| Tatoeba-test.eng.multi | 11.3 | 0.306 |
| Tatoeba-test.eng-nau.eng.nau | 0.5 | 0.071 |
| Tatoeba-test.eng-niu.eng.niu | 42.5 | 0.560 |
| Tatoeba-test.eng-rap.eng.rap | 3.3 | 0.122 |
| Tatoeba-test.eng-smo.eng.smo | 27.0 | 0.462 |
| Tatoeba-test.eng-tah.eng.tah | 11.3 | 0.307 |
| Tatoeba-test.eng-ton.eng.ton | 27.0 | 0.528 |
| Tatoeba-test.eng-tvl.eng.tvl | 29.3 | 0.513 |
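The chr-F column above is a character n-gram F-score. A rough sketch of how such a score can be computed — note this is a simplification: the official chrF implementation (e.g. in sacreBLEU) differs in details such as whitespace handling and the word-order component of chrF++, so treat it as illustrative only:

```python
from collections import Counter

def char_ngrams(text, n):
    # Whitespace is stripped before extracting character n-grams.
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chrF: F-beta over averaged char n-gram precision/recall."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(chrf("translation", "translation"))  # identical strings score 1.0
```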
### System Info:
- hf_name: eng-pqe
- source_languages: eng
- target_languages: pqe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-pqe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe']
- src_constituents: {'eng'}
- tgt_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqe/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqe/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: pqe
- short_pair: en-pqe
- chrF2_score: 0.306
- bleu: 11.3
- brevity_penalty: 1.0
- ref_len: 5786.0
- src_name: English
- tgt_name: Eastern Malayo-Polynesian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: pqe
- prefer_old: False
- long_pair: eng-pqe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "fj", "mi", "ty", "to", "na", "sm", "mh", "pqe"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-pqe | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"fj",
"mi",
"ty",
"to",
"na",
"sm",
"mh",
"pqe",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"fj",
"mi",
"ty",
"to",
"na",
"sm",
"mh",
"pqe"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #fj #mi #ty #to #na #sm #mh #pqe #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-pqe
* source group: English
* target group: Eastern Malayo-Polynesian languages
* OPUS readme: eng-pqe
* model: transformer
* source language(s): eng
* target language(s): fij gil haw lkt mah mri nau niu rap smo tah ton tvl
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 22.1, chr-F: 0.396
testset: URL, BLEU: 41.9, chr-F: 0.673
testset: URL, BLEU: 0.6, chr-F: 0.114
testset: URL, BLEU: 0.5, chr-F: 0.075
testset: URL, BLEU: 9.7, chr-F: 0.386
testset: URL, BLEU: 7.7, chr-F: 0.301
testset: URL, BLEU: 11.3, chr-F: 0.306
testset: URL, BLEU: 0.5, chr-F: 0.071
testset: URL, BLEU: 42.5, chr-F: 0.560
testset: URL, BLEU: 3.3, chr-F: 0.122
testset: URL, BLEU: 27.0, chr-F: 0.462
testset: URL, BLEU: 11.3, chr-F: 0.307
testset: URL, BLEU: 27.0, chr-F: 0.528
testset: URL, BLEU: 29.3, chr-F: 0.513
### System Info:
* hf\_name: eng-pqe
* source\_languages: eng
* target\_languages: pqe
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe']
* src\_constituents: {'eng'}
* tgt\_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: pqe
* short\_pair: en-pqe
* chrF2\_score: 0.306
* bleu: 11.3
* brevity\_penalty: 1.0
* ref\_len: 5786.0
* src\_name: English
* tgt\_name: Eastern Malayo-Polynesian languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: pqe
* prefer\_old: False
* long\_pair: eng-pqe
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-pqe\n\n\n* source group: English\n* target group: Eastern Malayo-Polynesian languages\n* OPUS readme: eng-pqe\n* model: transformer\n* source language(s): eng\n* target language(s): fij gil haw lkt mah mri nau niu rap smo tah ton tvl\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.396\ntestset: URL, BLEU: 41.9, chr-F: 0.673\ntestset: URL, BLEU: 0.6, chr-F: 0.114\ntestset: URL, BLEU: 0.5, chr-F: 0.075\ntestset: URL, BLEU: 9.7, chr-F: 0.386\ntestset: URL, BLEU: 7.7, chr-F: 0.301\ntestset: URL, BLEU: 11.3, chr-F: 0.306\ntestset: URL, BLEU: 0.5, chr-F: 0.071\ntestset: URL, BLEU: 42.5, chr-F: 0.560\ntestset: URL, BLEU: 3.3, chr-F: 0.122\ntestset: URL, BLEU: 27.0, chr-F: 0.462\ntestset: URL, BLEU: 11.3, chr-F: 0.307\ntestset: URL, BLEU: 27.0, chr-F: 0.528\ntestset: URL, BLEU: 29.3, chr-F: 0.513",
"### System Info:\n\n\n* hf\\_name: eng-pqe\n* source\\_languages: eng\n* target\\_languages: pqe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: pqe\n* short\\_pair: en-pqe\n* chrF2\\_score: 0.306\n* bleu: 11.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 5786.0\n* src\\_name: English\n* tgt\\_name: Eastern Malayo-Polynesian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: pqe\n* prefer\\_old: False\n* long\\_pair: eng-pqe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #fj #mi #ty #to #na #sm #mh #pqe #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-pqe\n\n\n* source group: English\n* target group: Eastern Malayo-Polynesian languages\n* OPUS readme: eng-pqe\n* model: transformer\n* source language(s): eng\n* target language(s): fij gil haw lkt mah mri nau niu rap smo tah ton tvl\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.396\ntestset: URL, BLEU: 41.9, chr-F: 0.673\ntestset: URL, BLEU: 0.6, chr-F: 0.114\ntestset: URL, BLEU: 0.5, chr-F: 0.075\ntestset: URL, BLEU: 9.7, chr-F: 0.386\ntestset: URL, BLEU: 7.7, chr-F: 0.301\ntestset: URL, BLEU: 11.3, chr-F: 0.306\ntestset: URL, BLEU: 0.5, chr-F: 0.071\ntestset: URL, BLEU: 42.5, chr-F: 0.560\ntestset: URL, BLEU: 3.3, chr-F: 0.122\ntestset: URL, BLEU: 27.0, chr-F: 0.462\ntestset: URL, BLEU: 11.3, chr-F: 0.307\ntestset: URL, BLEU: 27.0, chr-F: 0.528\ntestset: URL, BLEU: 29.3, chr-F: 0.513",
"### System Info:\n\n\n* hf\\_name: eng-pqe\n* source\\_languages: eng\n* target\\_languages: pqe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: pqe\n* short\\_pair: en-pqe\n* chrF2\\_score: 0.306\n* bleu: 11.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 5786.0\n* src\\_name: English\n* tgt\\_name: Eastern Malayo-Polynesian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: pqe\n* prefer\\_old: False\n* long\\_pair: eng-pqe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
69,
478,
491
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #fj #mi #ty #to #na #sm #mh #pqe #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-pqe\n\n\n* source group: English\n* target group: Eastern Malayo-Polynesian languages\n* OPUS readme: eng-pqe\n* model: transformer\n* source language(s): eng\n* target language(s): fij gil haw lkt mah mri nau niu rap smo tah ton tvl\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.396\ntestset: URL, BLEU: 41.9, chr-F: 0.673\ntestset: URL, BLEU: 0.6, chr-F: 0.114\ntestset: URL, BLEU: 0.5, chr-F: 0.075\ntestset: URL, BLEU: 9.7, chr-F: 0.386\ntestset: URL, BLEU: 7.7, chr-F: 0.301\ntestset: URL, BLEU: 11.3, chr-F: 0.306\ntestset: URL, BLEU: 0.5, chr-F: 0.071\ntestset: URL, BLEU: 42.5, chr-F: 0.560\ntestset: URL, BLEU: 3.3, chr-F: 0.122\ntestset: URL, BLEU: 27.0, chr-F: 0.462\ntestset: URL, BLEU: 11.3, chr-F: 0.307\ntestset: URL, BLEU: 27.0, chr-F: 0.528\ntestset: URL, BLEU: 29.3, chr-F: 0.513### System Info:\n\n\n* hf\\_name: eng-pqe\n* source\\_languages: eng\n* target\\_languages: pqe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'fj', 'mi', 'ty', 'to', 'na', 'sm', 'mh', 'pqe']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'haw', 'gil', 'rap', 'fij', 'tvl', 'mri', 'tah', 'niu', 'ton', 'nau', 'smo', 'mah'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: pqe\n* short\\_pair: en-pqe\n* chrF2\\_score: 0.306\n* bleu: 11.3\n* brevity\\_penalty: 1.0\n* 
ref\\_len: 5786.0\n* src\\_name: English\n* tgt\\_name: Eastern Malayo-Polynesian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: pqe\n* prefer\\_old: False\n* long\\_pair: eng-pqe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-pqw
* source group: English
* target group: Western Malayo-Polynesian languages
* OPUS readme: [eng-pqw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-pqw/README.md)
* model: transformer
* source language(s): eng
* target language(s): akl_Latn ceb cha dtp hil iba ilo ind jav jav_Java mad max_Latn min mlg pag pau sun tmw_Latn war zlm_Latn zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.eval.txt)
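Because this model serves many target languages, the set of valid `>>id<<` tokens can be recovered from the tokenizer vocabulary. A sketch of that lookup (the toy vocabulary below is an assumption for illustration — in practice one would inspect `MarianTokenizer.from_pretrained(...).get_vocab()`):

```python
import re

def language_tokens(vocab: dict) -> list:
    """Return sorted language-token ids (e.g. 'ceb', 'zsm_Latn') in a vocab."""
    pattern = re.compile(r"^>>([a-z]{2,3}(?:_[A-Za-z]+)?)<<$")
    ids = {m.group(1) for tok in vocab for m in [pattern.match(tok)] if m}
    return sorted(ids)

# Toy vocabulary standing in for the real tokenizer vocab (assumption).
toy_vocab = {">>ceb<<": 0, ">>ilo<<": 1, ">>zsm_Latn<<": 2, "hello": 3, "</s>": 4}
print(language_tokens(toy_vocab))  # → ['ceb', 'ilo', 'zsm_Latn']
```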
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-akl.eng.akl | 3.0 | 0.143 |
| Tatoeba-test.eng-ceb.eng.ceb | 11.4 | 0.432 |
| Tatoeba-test.eng-cha.eng.cha | 1.4 | 0.189 |
| Tatoeba-test.eng-dtp.eng.dtp | 0.6 | 0.139 |
| Tatoeba-test.eng-hil.eng.hil | 17.7 | 0.525 |
| Tatoeba-test.eng-iba.eng.iba | 14.6 | 0.365 |
| Tatoeba-test.eng-ilo.eng.ilo | 34.0 | 0.590 |
| Tatoeba-test.eng-jav.eng.jav | 6.2 | 0.299 |
| Tatoeba-test.eng-mad.eng.mad | 2.6 | 0.154 |
| Tatoeba-test.eng-mlg.eng.mlg | 34.3 | 0.518 |
| Tatoeba-test.eng-msa.eng.msa | 31.1 | 0.561 |
| Tatoeba-test.eng.multi | 17.5 | 0.422 |
| Tatoeba-test.eng-pag.eng.pag | 19.8 | 0.507 |
| Tatoeba-test.eng-pau.eng.pau | 1.2 | 0.129 |
| Tatoeba-test.eng-sun.eng.sun | 30.3 | 0.418 |
| Tatoeba-test.eng-war.eng.war | 12.6 | 0.439 |
### System Info:
- hf_name: eng-pqw
- source_languages: eng
- target_languages: pqw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-pqw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'pqw']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-pqw/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: pqw
- short_pair: en-pqw
- chrF2_score: 0.422
- bleu: 17.5
- brevity_penalty: 1.0
- ref_len: 66758.0
- src_name: English
- tgt_name: Western Malayo-Polynesian languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: pqw
- prefer_old: False
- long_pair: eng-pqw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "pqw"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-pqw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pqw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"pqw"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #pqw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-pqw
* source group: English
* target group: Western Malayo-Polynesian languages
* OPUS readme: eng-pqw
* model: transformer
* source language(s): eng
* target language(s): akl\_Latn ceb cha dtp hil iba ilo ind jav jav\_Java mad max\_Latn min mlg pag pau sun tmw\_Latn war zlm\_Latn zsm\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 3.0, chr-F: 0.143
testset: URL, BLEU: 11.4, chr-F: 0.432
testset: URL, BLEU: 1.4, chr-F: 0.189
testset: URL, BLEU: 0.6, chr-F: 0.139
testset: URL, BLEU: 17.7, chr-F: 0.525
testset: URL, BLEU: 14.6, chr-F: 0.365
testset: URL, BLEU: 34.0, chr-F: 0.590
testset: URL, BLEU: 6.2, chr-F: 0.299
testset: URL, BLEU: 2.6, chr-F: 0.154
testset: URL, BLEU: 34.3, chr-F: 0.518
testset: URL, BLEU: 31.1, chr-F: 0.561
testset: URL, BLEU: 17.5, chr-F: 0.422
testset: URL, BLEU: 19.8, chr-F: 0.507
testset: URL, BLEU: 1.2, chr-F: 0.129
testset: URL, BLEU: 30.3, chr-F: 0.418
testset: URL, BLEU: 12.6, chr-F: 0.439
### System Info:
* hf\_name: eng-pqw
* source\_languages: eng
* target\_languages: pqw
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'pqw']
* src\_constituents: {'eng'}
* tgt\_constituents: set()
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: pqw
* short\_pair: en-pqw
* chrF2\_score: 0.42200000000000004
* bleu: 17.5
* brevity\_penalty: 1.0
* ref\_len: 66758.0
* src\_name: English
* tgt\_name: Western Malayo-Polynesian languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: pqw
* prefer\_old: False
* long\_pair: eng-pqw
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-pqw\n\n\n* source group: English\n* target group: Western Malayo-Polynesian languages\n* OPUS readme: eng-pqw\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb cha dtp hil iba ilo ind jav jav\\_Java mad max\\_Latn min mlg pag pau sun tmw\\_Latn war zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 3.0, chr-F: 0.143\ntestset: URL, BLEU: 11.4, chr-F: 0.432\ntestset: URL, BLEU: 1.4, chr-F: 0.189\ntestset: URL, BLEU: 0.6, chr-F: 0.139\ntestset: URL, BLEU: 17.7, chr-F: 0.525\ntestset: URL, BLEU: 14.6, chr-F: 0.365\ntestset: URL, BLEU: 34.0, chr-F: 0.590\ntestset: URL, BLEU: 6.2, chr-F: 0.299\ntestset: URL, BLEU: 2.6, chr-F: 0.154\ntestset: URL, BLEU: 34.3, chr-F: 0.518\ntestset: URL, BLEU: 31.1, chr-F: 0.561\ntestset: URL, BLEU: 17.5, chr-F: 0.422\ntestset: URL, BLEU: 19.8, chr-F: 0.507\ntestset: URL, BLEU: 1.2, chr-F: 0.129\ntestset: URL, BLEU: 30.3, chr-F: 0.418\ntestset: URL, BLEU: 12.6, chr-F: 0.439",
"### System Info:\n\n\n* hf\\_name: eng-pqw\n* source\\_languages: eng\n* target\\_languages: pqw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'pqw']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: pqw\n* short\\_pair: en-pqw\n* chrF2\\_score: 0.42200000000000004\n* bleu: 17.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 66758.0\n* src\\_name: English\n* tgt\\_name: Western Malayo-Polynesian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: pqw\n* prefer\\_old: False\n* long\\_pair: eng-pqw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pqw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-pqw\n\n\n* source group: English\n* target group: Western Malayo-Polynesian languages\n* OPUS readme: eng-pqw\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb cha dtp hil iba ilo ind jav jav\\_Java mad max\\_Latn min mlg pag pau sun tmw\\_Latn war zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 3.0, chr-F: 0.143\ntestset: URL, BLEU: 11.4, chr-F: 0.432\ntestset: URL, BLEU: 1.4, chr-F: 0.189\ntestset: URL, BLEU: 0.6, chr-F: 0.139\ntestset: URL, BLEU: 17.7, chr-F: 0.525\ntestset: URL, BLEU: 14.6, chr-F: 0.365\ntestset: URL, BLEU: 34.0, chr-F: 0.590\ntestset: URL, BLEU: 6.2, chr-F: 0.299\ntestset: URL, BLEU: 2.6, chr-F: 0.154\ntestset: URL, BLEU: 34.3, chr-F: 0.518\ntestset: URL, BLEU: 31.1, chr-F: 0.561\ntestset: URL, BLEU: 17.5, chr-F: 0.422\ntestset: URL, BLEU: 19.8, chr-F: 0.507\ntestset: URL, BLEU: 1.2, chr-F: 0.129\ntestset: URL, BLEU: 30.3, chr-F: 0.418\ntestset: URL, BLEU: 12.6, chr-F: 0.439",
"### System Info:\n\n\n* hf\\_name: eng-pqw\n* source\\_languages: eng\n* target\\_languages: pqw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'pqw']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: pqw\n* short\\_pair: en-pqw\n* chrF2\\_score: 0.42200000000000004\n* bleu: 17.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 66758.0\n* src\\_name: English\n* tgt\\_name: Western Malayo-Polynesian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: pqw\n* prefer\\_old: False\n* long\\_pair: eng-pqw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
53,
561,
416
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pqw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-pqw\n\n\n* source group: English\n* target group: Western Malayo-Polynesian languages\n* OPUS readme: eng-pqw\n* model: transformer\n* source language(s): eng\n* target language(s): akl\\_Latn ceb cha dtp hil iba ilo ind jav jav\\_Java mad max\\_Latn min mlg pag pau sun tmw\\_Latn war zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 3.0, chr-F: 0.143\ntestset: URL, BLEU: 11.4, chr-F: 0.432\ntestset: URL, BLEU: 1.4, chr-F: 0.189\ntestset: URL, BLEU: 0.6, chr-F: 0.139\ntestset: URL, BLEU: 17.7, chr-F: 0.525\ntestset: URL, BLEU: 14.6, chr-F: 0.365\ntestset: URL, BLEU: 34.0, chr-F: 0.590\ntestset: URL, BLEU: 6.2, chr-F: 0.299\ntestset: URL, BLEU: 2.6, chr-F: 0.154\ntestset: URL, BLEU: 34.3, chr-F: 0.518\ntestset: URL, BLEU: 31.1, chr-F: 0.561\ntestset: URL, BLEU: 17.5, chr-F: 0.422\ntestset: URL, BLEU: 19.8, chr-F: 0.507\ntestset: URL, BLEU: 1.2, chr-F: 0.129\ntestset: URL, BLEU: 30.3, chr-F: 0.418\ntestset: URL, BLEU: 12.6, chr-F: 0.439### System Info:\n\n\n* hf\\_name: eng-pqw\n* source\\_languages: eng\n* target\\_languages: pqw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'pqw']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: pqw\n* short\\_pair: en-pqw\n* chrF2\\_score: 0.42200000000000004\n* bleu: 17.5\n* 
brevity\\_penalty: 1.0\n* ref\\_len: 66758.0\n* src\\_name: English\n* tgt\\_name: Western Malayo-Polynesian languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: pqw\n* prefer\\_old: False\n* long\\_pair: eng-pqw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-run
* source group: English
* target group: Rundi
* OPUS readme: [eng-run](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-run/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): run
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-run/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-run/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-run/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.run | 10.4 | 0.436 |
### System Info:
- hf_name: eng-run
- source_languages: eng
- target_languages: run
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-run/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'rn']
- src_constituents: {'eng'}
- tgt_constituents: {'run'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-run/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-run/opus-2020-06-16.test.txt
- src_alpha3: eng
- tgt_alpha3: run
- short_pair: en-rn
- chrF2_score: 0.436
- bleu: 10.4
- brevity_penalty: 1.0
- ref_len: 6710.0
- src_name: English
- tgt_name: Rundi
- train_date: 2020-06-16
- src_alpha2: en
- tgt_alpha2: rn
- prefer_old: False
- long_pair: eng-run
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "rn"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-rn | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"rn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"rn"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #rn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-run
* source group: English
* target group: Rundi
* OPUS readme: eng-run
* model: transformer-align
* source language(s): eng
* target language(s): run
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 10.4, chr-F: 0.436
### System Info:
* hf\_name: eng-run
* source\_languages: eng
* target\_languages: run
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'rn']
* src\_constituents: {'eng'}
* tgt\_constituents: {'run'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: run
* short\_pair: en-rn
* chrF2\_score: 0.436
* bleu: 10.4
* brevity\_penalty: 1.0
* ref\_len: 6710.0
* src\_name: English
* tgt\_name: Rundi
* train\_date: 2020-06-16
* src\_alpha2: en
* tgt\_alpha2: rn
* prefer\_old: False
* long\_pair: eng-run
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
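The `brevity_penalty` and `ref_len` fields above follow the standard BLEU brevity-penalty definition; a minimal sketch (the function name is illustrative, not part of the evaluation toolchain):

```python
import math

def brevity_penalty(ref_len: float, hyp_len: float) -> float:
    """Standard BLEU brevity penalty: 1.0 when the hypothesis is at least
    as long as the reference, exp(1 - ref_len/hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# A value of 1.0 (as reported for eng-run above) means the system's
# output was not shorter than the reference overall.
print(brevity_penalty(6710.0, 6710.0))  # → 1.0
```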
| [
"### eng-run\n\n\n* source group: English\n* target group: Rundi\n* OPUS readme: eng-run\n* model: transformer-align\n* source language(s): eng\n* target language(s): run\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.4, chr-F: 0.436",
"### System Info:\n\n\n* hf\\_name: eng-run\n* source\\_languages: eng\n* target\\_languages: run\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'rn']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'run'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: run\n* short\\_pair: en-rn\n* chrF2\\_score: 0.436\n* bleu: 10.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 6710.0\n* src\\_name: English\n* tgt\\_name: Rundi\n* train\\_date: 2020-06-16\n* src\\_alpha2: en\n* tgt\\_alpha2: rn\n* prefer\\_old: False\n* long\\_pair: eng-run\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #rn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-run\n\n\n* source group: English\n* target group: Rundi\n* OPUS readme: eng-run\n* model: transformer-align\n* source language(s): eng\n* target language(s): run\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.4, chr-F: 0.436",
"### System Info:\n\n\n* hf\\_name: eng-run\n* source\\_languages: eng\n* target\\_languages: run\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'rn']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'run'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: run\n* short\\_pair: en-rn\n* chrF2\\_score: 0.436\n* bleu: 10.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 6710.0\n* src\\_name: English\n* tgt\\_name: Rundi\n* train\\_date: 2020-06-16\n* src\\_alpha2: en\n* tgt\\_alpha2: rn\n* prefer\\_old: False\n* long\\_pair: eng-run\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
132,
391
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #rn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-run\n\n\n* source group: English\n* target group: Rundi\n* OPUS readme: eng-run\n* model: transformer-align\n* source language(s): eng\n* target language(s): run\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.4, chr-F: 0.436### System Info:\n\n\n* hf\\_name: eng-run\n* source\\_languages: eng\n* target\\_languages: run\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'rn']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'run'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: run\n* short\\_pair: en-rn\n* chrF2\\_score: 0.436\n* bleu: 10.4\n* brevity\\_penalty: 1.0\n* ref\\_len: 6710.0\n* src\\_name: English\n* tgt\\_name: Rundi\n* train\\_date: 2020-06-16\n* src\\_alpha2: en\n* tgt\\_alpha2: rn\n* prefer\\_old: False\n* long\\_pair: eng-run\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-rnd
* source languages: en
* target languages: rnd
* OPUS readme: [en-rnd](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-rnd/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rnd/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.rnd | 34.5 | 0.571 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-rnd | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"rnd",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #rnd #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-rnd
* source languages: en
* target languages: rnd
* OPUS readme: en-rnd
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 34.5, chr-F: 0.571
| [
"### opus-mt-en-rnd\n\n\n* source languages: en\n* target languages: rnd\n* OPUS readme: en-rnd\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.5, chr-F: 0.571"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #rnd #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-rnd\n\n\n* source languages: en\n* target languages: rnd\n* OPUS readme: en-rnd\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.5, chr-F: 0.571"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #rnd #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-rnd\n\n\n* source languages: en\n* target languages: rnd\n* OPUS readme: en-rnd\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.5, chr-F: 0.571"
] |
translation | transformers |
### opus-mt-en-ro
* source languages: en
* target languages: ro
* OPUS readme: [en-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ro/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ro/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ro/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro.en.ro | 30.8 | 0.592 |
| newstest2016-enro.en.ro | 28.8 | 0.571 |
| Tatoeba.en.ro | 45.3 | 0.670 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ro | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ro
* source languages: en
* target languages: ro
* OPUS readme: en-ro
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 30.8, chr-F: 0.592
testset: URL, BLEU: 28.8, chr-F: 0.571
testset: URL, BLEU: 45.3, chr-F: 0.670
| [
"### opus-mt-en-ro\n\n\n* source languages: en\n* target languages: ro\n* OPUS readme: en-ro\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.8, chr-F: 0.592\ntestset: URL, BLEU: 28.8, chr-F: 0.571\ntestset: URL, BLEU: 45.3, chr-F: 0.670"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ro\n\n\n* source languages: en\n* target languages: ro\n* OPUS readme: en-ro\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.8, chr-F: 0.592\ntestset: URL, BLEU: 28.8, chr-F: 0.571\ntestset: URL, BLEU: 45.3, chr-F: 0.670"
] | [
51,
151
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ro\n\n\n* source languages: en\n* target languages: ro\n* OPUS readme: en-ro\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.8, chr-F: 0.592\ntestset: URL, BLEU: 28.8, chr-F: 0.571\ntestset: URL, BLEU: 45.3, chr-F: 0.670"
] |
translation | transformers |
### eng-roa
* source group: English
* target group: Romance languages
* OPUS readme: [eng-roa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-roa/README.md)
* model: transformer
* source language(s): eng
* target language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-engron.eng.ron | 27.6 | 0.567 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 30.2 | 0.575 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 35.5 | 0.612 |
| newssyscomb2009-engfra.eng.fra | 27.9 | 0.570 |
| newssyscomb2009-engita.eng.ita | 29.3 | 0.590 |
| newssyscomb2009-engspa.eng.spa | 29.6 | 0.570 |
| news-test2008-engfra.eng.fra | 25.2 | 0.538 |
| news-test2008-engspa.eng.spa | 27.3 | 0.548 |
| newstest2009-engfra.eng.fra | 26.9 | 0.560 |
| newstest2009-engita.eng.ita | 28.7 | 0.583 |
| newstest2009-engspa.eng.spa | 29.0 | 0.568 |
| newstest2010-engfra.eng.fra | 29.3 | 0.574 |
| newstest2010-engspa.eng.spa | 34.2 | 0.601 |
| newstest2011-engfra.eng.fra | 31.4 | 0.592 |
| newstest2011-engspa.eng.spa | 35.0 | 0.599 |
| newstest2012-engfra.eng.fra | 29.5 | 0.576 |
| newstest2012-engspa.eng.spa | 35.5 | 0.603 |
| newstest2013-engfra.eng.fra | 29.9 | 0.567 |
| newstest2013-engspa.eng.spa | 32.1 | 0.578 |
| newstest2016-enro-engron.eng.ron | 26.1 | 0.551 |
| Tatoeba-test.eng-arg.eng.arg | 1.4 | 0.125 |
| Tatoeba-test.eng-ast.eng.ast | 17.8 | 0.406 |
| Tatoeba-test.eng-cat.eng.cat | 48.3 | 0.676 |
| Tatoeba-test.eng-cos.eng.cos | 3.2 | 0.275 |
| Tatoeba-test.eng-egl.eng.egl | 0.2 | 0.084 |
| Tatoeba-test.eng-ext.eng.ext | 11.2 | 0.344 |
| Tatoeba-test.eng-fra.eng.fra | 45.3 | 0.637 |
| Tatoeba-test.eng-frm.eng.frm | 1.1 | 0.221 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.6 | 0.118 |
| Tatoeba-test.eng-glg.eng.glg | 44.2 | 0.645 |
| Tatoeba-test.eng-hat.eng.hat | 28.0 | 0.502 |
| Tatoeba-test.eng-ita.eng.ita | 45.6 | 0.674 |
| Tatoeba-test.eng-lad.eng.lad | 8.2 | 0.322 |
| Tatoeba-test.eng-lij.eng.lij | 1.4 | 0.182 |
| Tatoeba-test.eng-lld.eng.lld | 0.8 | 0.217 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.7 | 0.190 |
| Tatoeba-test.eng-mfe.eng.mfe | 91.9 | 0.956 |
| Tatoeba-test.eng-msa.eng.msa | 31.1 | 0.548 |
| Tatoeba-test.eng.multi | 42.9 | 0.636 |
| Tatoeba-test.eng-mwl.eng.mwl | 2.1 | 0.234 |
| Tatoeba-test.eng-oci.eng.oci | 7.9 | 0.297 |
| Tatoeba-test.eng-pap.eng.pap | 44.1 | 0.648 |
| Tatoeba-test.eng-pms.eng.pms | 2.1 | 0.190 |
| Tatoeba-test.eng-por.eng.por | 41.8 | 0.639 |
| Tatoeba-test.eng-roh.eng.roh | 3.5 | 0.261 |
| Tatoeba-test.eng-ron.eng.ron | 41.0 | 0.635 |
| Tatoeba-test.eng-scn.eng.scn | 1.7 | 0.184 |
| Tatoeba-test.eng-spa.eng.spa | 50.1 | 0.689 |
| Tatoeba-test.eng-vec.eng.vec | 3.2 | 0.248 |
| Tatoeba-test.eng-wln.eng.wln | 7.2 | 0.220 |
### System Info:
- hf_name: eng-roa
- source_languages: eng
- target_languages: roa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-roa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa']
- src_constituents: {'eng'}
- tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: roa
- short_pair: en-roa
- chrF2_score: 0.636
- bleu: 42.9
- brevity_penalty: 0.978
- ref_len: 72751.0
- src_name: English
- tgt_name: Romance languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: roa
- prefer_old: False
- long_pair: eng-roa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "it", "ca", "rm", "es", "ro", "gl", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "roa"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-roa | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"roa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"it",
"ca",
"rm",
"es",
"ro",
"gl",
"co",
"wa",
"pt",
"oc",
"an",
"id",
"fr",
"ht",
"roa"
] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #it #ca #rm #es #ro #gl #co #wa #pt #oc #an #id #fr #ht #roa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-roa
* source group: English
* target group: Romance languages
* OPUS readme: eng-roa
* model: transformer
* source language(s): eng
* target language(s): arg ast cat cos egl ext fra frm\_Latn gcf\_Latn glg hat ind ita lad lad\_Latn lij lld\_Latn lmo max\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\_Latn vec wln zlm\_Latn zsm\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.6, chr-F: 0.567
testset: URL, BLEU: 30.2, chr-F: 0.575
testset: URL, BLEU: 35.5, chr-F: 0.612
testset: URL, BLEU: 27.9, chr-F: 0.570
testset: URL, BLEU: 29.3, chr-F: 0.590
testset: URL, BLEU: 29.6, chr-F: 0.570
testset: URL, BLEU: 25.2, chr-F: 0.538
testset: URL, BLEU: 27.3, chr-F: 0.548
testset: URL, BLEU: 26.9, chr-F: 0.560
testset: URL, BLEU: 28.7, chr-F: 0.583
testset: URL, BLEU: 29.0, chr-F: 0.568
testset: URL, BLEU: 29.3, chr-F: 0.574
testset: URL, BLEU: 34.2, chr-F: 0.601
testset: URL, BLEU: 31.4, chr-F: 0.592
testset: URL, BLEU: 35.0, chr-F: 0.599
testset: URL, BLEU: 29.5, chr-F: 0.576
testset: URL, BLEU: 35.5, chr-F: 0.603
testset: URL, BLEU: 29.9, chr-F: 0.567
testset: URL, BLEU: 32.1, chr-F: 0.578
testset: URL, BLEU: 26.1, chr-F: 0.551
testset: URL, BLEU: 1.4, chr-F: 0.125
testset: URL, BLEU: 17.8, chr-F: 0.406
testset: URL, BLEU: 48.3, chr-F: 0.676
testset: URL, BLEU: 3.2, chr-F: 0.275
testset: URL, BLEU: 0.2, chr-F: 0.084
testset: URL, BLEU: 11.2, chr-F: 0.344
testset: URL, BLEU: 45.3, chr-F: 0.637
testset: URL, BLEU: 1.1, chr-F: 0.221
testset: URL, BLEU: 0.6, chr-F: 0.118
testset: URL, BLEU: 44.2, chr-F: 0.645
testset: URL, BLEU: 28.0, chr-F: 0.502
testset: URL, BLEU: 45.6, chr-F: 0.674
testset: URL, BLEU: 8.2, chr-F: 0.322
testset: URL, BLEU: 1.4, chr-F: 0.182
testset: URL, BLEU: 0.8, chr-F: 0.217
testset: URL, BLEU: 0.7, chr-F: 0.190
testset: URL, BLEU: 91.9, chr-F: 0.956
testset: URL, BLEU: 31.1, chr-F: 0.548
testset: URL, BLEU: 42.9, chr-F: 0.636
testset: URL, BLEU: 2.1, chr-F: 0.234
testset: URL, BLEU: 7.9, chr-F: 0.297
testset: URL, BLEU: 44.1, chr-F: 0.648
testset: URL, BLEU: 2.1, chr-F: 0.190
testset: URL, BLEU: 41.8, chr-F: 0.639
testset: URL, BLEU: 3.5, chr-F: 0.261
testset: URL, BLEU: 41.0, chr-F: 0.635
testset: URL, BLEU: 1.7, chr-F: 0.184
testset: URL, BLEU: 50.1, chr-F: 0.689
testset: URL, BLEU: 3.2, chr-F: 0.248
testset: URL, BLEU: 7.2, chr-F: 0.220
### System Info:
* hf\_name: eng-roa
* source\_languages: eng
* target\_languages: roa
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa']
* src\_constituents: {'eng'}
* tgt\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad\_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\_Latn', 'gcf\_Latn', 'lld\_Latn', 'min', 'tmw\_Latn', 'cos', 'wln', 'zlm\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\_Latn', 'frm\_Latn', 'scn', 'mfe'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: roa
* short\_pair: en-roa
* chrF2\_score: 0.636
* bleu: 42.9
* brevity\_penalty: 0.978
* ref\_len: 72751.0
* src\_name: English
* tgt\_name: Romance languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: roa
* prefer\_old: False
* long\_pair: eng-roa
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-roa\n\n\n* source group: English\n* target group: Romance languages\n* OPUS readme: eng-roa\n* model: transformer\n* source language(s): eng\n* target language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.567\ntestset: URL, BLEU: 30.2, chr-F: 0.575\ntestset: URL, BLEU: 35.5, chr-F: 0.612\ntestset: URL, BLEU: 27.9, chr-F: 0.570\ntestset: URL, BLEU: 29.3, chr-F: 0.590\ntestset: URL, BLEU: 29.6, chr-F: 0.570\ntestset: URL, BLEU: 25.2, chr-F: 0.538\ntestset: URL, BLEU: 27.3, chr-F: 0.548\ntestset: URL, BLEU: 26.9, chr-F: 0.560\ntestset: URL, BLEU: 28.7, chr-F: 0.583\ntestset: URL, BLEU: 29.0, chr-F: 0.568\ntestset: URL, BLEU: 29.3, chr-F: 0.574\ntestset: URL, BLEU: 34.2, chr-F: 0.601\ntestset: URL, BLEU: 31.4, chr-F: 0.592\ntestset: URL, BLEU: 35.0, chr-F: 0.599\ntestset: URL, BLEU: 29.5, chr-F: 0.576\ntestset: URL, BLEU: 35.5, chr-F: 0.603\ntestset: URL, BLEU: 29.9, chr-F: 0.567\ntestset: URL, BLEU: 32.1, chr-F: 0.578\ntestset: URL, BLEU: 26.1, chr-F: 0.551\ntestset: URL, BLEU: 1.4, chr-F: 0.125\ntestset: URL, BLEU: 17.8, chr-F: 0.406\ntestset: URL, BLEU: 48.3, chr-F: 0.676\ntestset: URL, BLEU: 3.2, chr-F: 0.275\ntestset: URL, BLEU: 0.2, chr-F: 0.084\ntestset: URL, BLEU: 11.2, chr-F: 0.344\ntestset: URL, BLEU: 45.3, chr-F: 0.637\ntestset: URL, BLEU: 1.1, chr-F: 0.221\ntestset: URL, BLEU: 0.6, chr-F: 0.118\ntestset: URL, BLEU: 44.2, chr-F: 0.645\ntestset: URL, BLEU: 28.0, chr-F: 0.502\ntestset: URL, BLEU: 45.6, chr-F: 0.674\ntestset: URL, BLEU: 8.2, chr-F: 
0.322\ntestset: URL, BLEU: 1.4, chr-F: 0.182\ntestset: URL, BLEU: 0.8, chr-F: 0.217\ntestset: URL, BLEU: 0.7, chr-F: 0.190\ntestset: URL, BLEU: 91.9, chr-F: 0.956\ntestset: URL, BLEU: 31.1, chr-F: 0.548\ntestset: URL, BLEU: 42.9, chr-F: 0.636\ntestset: URL, BLEU: 2.1, chr-F: 0.234\ntestset: URL, BLEU: 7.9, chr-F: 0.297\ntestset: URL, BLEU: 44.1, chr-F: 0.648\ntestset: URL, BLEU: 2.1, chr-F: 0.190\ntestset: URL, BLEU: 41.8, chr-F: 0.639\ntestset: URL, BLEU: 3.5, chr-F: 0.261\ntestset: URL, BLEU: 41.0, chr-F: 0.635\ntestset: URL, BLEU: 1.7, chr-F: 0.184\ntestset: URL, BLEU: 50.1, chr-F: 0.689\ntestset: URL, BLEU: 3.2, chr-F: 0.248\ntestset: URL, BLEU: 7.2, chr-F: 0.220",
"### System Info:\n\n\n* hf\\_name: eng-roa\n* source\\_languages: eng\n* target\\_languages: roa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad\\_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: roa\n* short\\_pair: en-roa\n* chrF2\\_score: 0.636\n* bleu: 42.9\n* brevity\\_penalty: 0.978\n* ref\\_len: 72751.0\n* src\\_name: English\n* tgt\\_name: Romance languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: roa\n* prefer\\_old: False\n* long\\_pair: eng-roa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #it #ca #rm #es #ro #gl #co #wa #pt #oc #an #id #fr #ht #roa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-roa\n\n\n* source group: English\n* target group: Romance languages\n* OPUS readme: eng-roa\n* model: transformer\n* source language(s): eng\n* target language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.567\ntestset: URL, BLEU: 30.2, chr-F: 0.575\ntestset: URL, BLEU: 35.5, chr-F: 0.612\ntestset: URL, BLEU: 27.9, chr-F: 0.570\ntestset: URL, BLEU: 29.3, chr-F: 0.590\ntestset: URL, BLEU: 29.6, chr-F: 0.570\ntestset: URL, BLEU: 25.2, chr-F: 0.538\ntestset: URL, BLEU: 27.3, chr-F: 0.548\ntestset: URL, BLEU: 26.9, chr-F: 0.560\ntestset: URL, BLEU: 28.7, chr-F: 0.583\ntestset: URL, BLEU: 29.0, chr-F: 0.568\ntestset: URL, BLEU: 29.3, chr-F: 0.574\ntestset: URL, BLEU: 34.2, chr-F: 0.601\ntestset: URL, BLEU: 31.4, chr-F: 0.592\ntestset: URL, BLEU: 35.0, chr-F: 0.599\ntestset: URL, BLEU: 29.5, chr-F: 0.576\ntestset: URL, BLEU: 35.5, chr-F: 0.603\ntestset: URL, BLEU: 29.9, chr-F: 0.567\ntestset: URL, BLEU: 32.1, chr-F: 0.578\ntestset: URL, BLEU: 26.1, chr-F: 0.551\ntestset: URL, BLEU: 1.4, chr-F: 0.125\ntestset: URL, BLEU: 17.8, chr-F: 0.406\ntestset: URL, BLEU: 48.3, chr-F: 0.676\ntestset: URL, BLEU: 3.2, chr-F: 0.275\ntestset: URL, BLEU: 0.2, chr-F: 0.084\ntestset: URL, BLEU: 11.2, chr-F: 0.344\ntestset: URL, BLEU: 45.3, chr-F: 0.637\ntestset: URL, BLEU: 1.1, chr-F: 0.221\ntestset: URL, BLEU: 0.6, chr-F: 0.118\ntestset: URL, BLEU: 44.2, chr-F: 0.645\ntestset: URL, BLEU: 28.0, chr-F: 0.502\ntestset: URL, BLEU: 45.6, chr-F: 0.674\ntestset: URL, BLEU: 8.2, chr-F: 
0.322\ntestset: URL, BLEU: 1.4, chr-F: 0.182\ntestset: URL, BLEU: 0.8, chr-F: 0.217\ntestset: URL, BLEU: 0.7, chr-F: 0.190\ntestset: URL, BLEU: 91.9, chr-F: 0.956\ntestset: URL, BLEU: 31.1, chr-F: 0.548\ntestset: URL, BLEU: 42.9, chr-F: 0.636\ntestset: URL, BLEU: 2.1, chr-F: 0.234\ntestset: URL, BLEU: 7.9, chr-F: 0.297\ntestset: URL, BLEU: 44.1, chr-F: 0.648\ntestset: URL, BLEU: 2.1, chr-F: 0.190\ntestset: URL, BLEU: 41.8, chr-F: 0.639\ntestset: URL, BLEU: 3.5, chr-F: 0.261\ntestset: URL, BLEU: 41.0, chr-F: 0.635\ntestset: URL, BLEU: 1.7, chr-F: 0.184\ntestset: URL, BLEU: 50.1, chr-F: 0.689\ntestset: URL, BLEU: 3.2, chr-F: 0.248\ntestset: URL, BLEU: 7.2, chr-F: 0.220",
"### System Info:\n\n\n* hf\\_name: eng-roa\n* source\\_languages: eng\n* target\\_languages: roa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad\\_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: roa\n* short\\_pair: en-roa\n* chrF2\\_score: 0.636\n* bleu: 42.9\n* brevity\\_penalty: 0.978\n* ref\\_len: 72751.0\n* src\\_name: English\n* tgt\\_name: Romance languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: roa\n* prefer\\_old: False\n* long\\_pair: eng-roa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
85,
1364,
661
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #it #ca #rm #es #ro #gl #co #wa #pt #oc #an #id #fr #ht #roa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-roa\n\n\n* source group: English\n* target group: Romance languages\n* OPUS readme: eng-roa\n* model: transformer\n* source language(s): eng\n* target language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.567\ntestset: URL, BLEU: 30.2, chr-F: 0.575\ntestset: URL, BLEU: 35.5, chr-F: 0.612\ntestset: URL, BLEU: 27.9, chr-F: 0.570\ntestset: URL, BLEU: 29.3, chr-F: 0.590\ntestset: URL, BLEU: 29.6, chr-F: 0.570\ntestset: URL, BLEU: 25.2, chr-F: 0.538\ntestset: URL, BLEU: 27.3, chr-F: 0.548\ntestset: URL, BLEU: 26.9, chr-F: 0.560\ntestset: URL, BLEU: 28.7, chr-F: 0.583\ntestset: URL, BLEU: 29.0, chr-F: 0.568\ntestset: URL, BLEU: 29.3, chr-F: 0.574\ntestset: URL, BLEU: 34.2, chr-F: 0.601\ntestset: URL, BLEU: 31.4, chr-F: 0.592\ntestset: URL, BLEU: 35.0, chr-F: 0.599\ntestset: URL, BLEU: 29.5, chr-F: 0.576\ntestset: URL, BLEU: 35.5, chr-F: 0.603\ntestset: URL, BLEU: 29.9, chr-F: 0.567\ntestset: URL, BLEU: 32.1, chr-F: 0.578\ntestset: URL, BLEU: 26.1, chr-F: 0.551\ntestset: URL, BLEU: 1.4, chr-F: 0.125\ntestset: URL, BLEU: 17.8, chr-F: 0.406\ntestset: URL, BLEU: 48.3, chr-F: 0.676\ntestset: URL, BLEU: 3.2, chr-F: 0.275\ntestset: URL, BLEU: 0.2, chr-F: 0.084\ntestset: URL, BLEU: 11.2, chr-F: 0.344\ntestset: URL, BLEU: 45.3, chr-F: 
0.637\ntestset: URL, BLEU: 1.1, chr-F: 0.221\ntestset: URL, BLEU: 0.6, chr-F: 0.118\ntestset: URL, BLEU: 44.2, chr-F: 0.645\ntestset: URL, BLEU: 28.0, chr-F: 0.502\ntestset: URL, BLEU: 45.6, chr-F: 0.674\ntestset: URL, BLEU: 8.2, chr-F: 0.322\ntestset: URL, BLEU: 1.4, chr-F: 0.182\ntestset: URL, BLEU: 0.8, chr-F: 0.217\ntestset: URL, BLEU: 0.7, chr-F: 0.190\ntestset: URL, BLEU: 91.9, chr-F: 0.956\ntestset: URL, BLEU: 31.1, chr-F: 0.548\ntestset: URL, BLEU: 42.9, chr-F: 0.636\ntestset: URL, BLEU: 2.1, chr-F: 0.234\ntestset: URL, BLEU: 7.9, chr-F: 0.297\ntestset: URL, BLEU: 44.1, chr-F: 0.648\ntestset: URL, BLEU: 2.1, chr-F: 0.190\ntestset: URL, BLEU: 41.8, chr-F: 0.639\ntestset: URL, BLEU: 3.5, chr-F: 0.261\ntestset: URL, BLEU: 41.0, chr-F: 0.635\ntestset: URL, BLEU: 1.7, chr-F: 0.184\ntestset: URL, BLEU: 50.1, chr-F: 0.689\ntestset: URL, BLEU: 3.2, chr-F: 0.248\ntestset: URL, BLEU: 7.2, chr-F: 0.220### System Info:\n\n\n* hf\\_name: eng-roa\n* source\\_languages: eng\n* target\\_languages: roa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad\\_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: roa\n* short\\_pair: en-roa\n* chrF2\\_score: 0.636\n* bleu: 42.9\n* brevity\\_penalty: 0.978\n* ref\\_len: 72751.0\n* src\\_name: English\n* tgt\\_name: Romance languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* 
tgt\\_alpha2: roa\n* prefer\\_old: False\n* long\\_pair: eng-roa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-ru
* source languages: en
* target languages: ru
* OPUS readme: [en-ru](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ru/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.zip)
* test set translations: [opus-2020-02-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.test.txt)
* test set scores: [opus-2020-02-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ru/opus-2020-02-11.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012.en.ru | 31.1 | 0.581 |
| newstest2013.en.ru | 23.5 | 0.513 |
| newstest2015-enru.en.ru | 27.5 | 0.564 |
| newstest2016-enru.en.ru | 26.4 | 0.548 |
| newstest2017-enru.en.ru | 29.1 | 0.572 |
| newstest2018-enru.en.ru | 25.4 | 0.554 |
| newstest2019-enru.en.ru | 27.1 | 0.533 |
| Tatoeba.en.ru | 48.4 | 0.669 |
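The benchmark tables in these cards follow a fixed three-column markdown layout (testset, BLEU, chr-F). A minimal sketch for pulling the scores out of such a row (`parse_benchmark_row` is a hypothetical helper, not part of any OPUS-MT tooling):

```python
def parse_benchmark_row(row: str):
    """Parse a markdown benchmark row like '| newstest2012.en.ru | 31.1 | 0.581 |'
    into a (testset, bleu, chrf) tuple."""
    # Drop the outer pipes, split on the inner ones, and trim whitespace.
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    testset, bleu, chrf = cells
    return testset, float(bleu), float(chrf)

print(parse_benchmark_row("| newstest2012.en.ru | 31.1 | 0.581 |"))
# → ('newstest2012.en.ru', 31.1, 0.581)
```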
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ru | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ru
* source languages: en
* target languages: ru
* OPUS readme: en-ru
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.1, chr-F: 0.581
testset: URL, BLEU: 23.5, chr-F: 0.513
testset: URL, BLEU: 27.5, chr-F: 0.564
testset: URL, BLEU: 26.4, chr-F: 0.548
testset: URL, BLEU: 29.1, chr-F: 0.572
testset: URL, BLEU: 25.4, chr-F: 0.554
testset: URL, BLEU: 27.1, chr-F: 0.533
testset: URL, BLEU: 48.4, chr-F: 0.669
| [
"### opus-mt-en-ru\n\n\n* source languages: en\n* target languages: ru\n* OPUS readme: en-ru\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.1, chr-F: 0.581\ntestset: URL, BLEU: 23.5, chr-F: 0.513\ntestset: URL, BLEU: 27.5, chr-F: 0.564\ntestset: URL, BLEU: 26.4, chr-F: 0.548\ntestset: URL, BLEU: 29.1, chr-F: 0.572\ntestset: URL, BLEU: 25.4, chr-F: 0.554\ntestset: URL, BLEU: 27.1, chr-F: 0.533\ntestset: URL, BLEU: 48.4, chr-F: 0.669"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ru\n\n\n* source languages: en\n* target languages: ru\n* OPUS readme: en-ru\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.1, chr-F: 0.581\ntestset: URL, BLEU: 23.5, chr-F: 0.513\ntestset: URL, BLEU: 27.5, chr-F: 0.564\ntestset: URL, BLEU: 26.4, chr-F: 0.548\ntestset: URL, BLEU: 29.1, chr-F: 0.572\ntestset: URL, BLEU: 25.4, chr-F: 0.554\ntestset: URL, BLEU: 27.1, chr-F: 0.533\ntestset: URL, BLEU: 48.4, chr-F: 0.669"
] | [
53,
267
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ru\n\n\n* source languages: en\n* target languages: ru\n* OPUS readme: en-ru\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.1, chr-F: 0.581\ntestset: URL, BLEU: 23.5, chr-F: 0.513\ntestset: URL, BLEU: 27.5, chr-F: 0.564\ntestset: URL, BLEU: 26.4, chr-F: 0.548\ntestset: URL, BLEU: 29.1, chr-F: 0.572\ntestset: URL, BLEU: 25.4, chr-F: 0.554\ntestset: URL, BLEU: 27.1, chr-F: 0.533\ntestset: URL, BLEU: 48.4, chr-F: 0.669"
] |
translation | transformers |
### opus-mt-en-run
* source languages: en
* target languages: run
* OPUS readme: [en-run](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-run/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-run/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.run | 34.2 | 0.591 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-run | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"run",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #run #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-run
* source languages: en
* target languages: run
* OPUS readme: en-run
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 34.2, chr-F: 0.591
| [
"### opus-mt-en-run\n\n\n* source languages: en\n* target languages: run\n* OPUS readme: en-run\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.2, chr-F: 0.591"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #run #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-run\n\n\n* source languages: en\n* target languages: run\n* OPUS readme: en-run\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.2, chr-F: 0.591"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #run #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-run\n\n\n* source languages: en\n* target languages: run\n* OPUS readme: en-run\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.2, chr-F: 0.591"
] |
translation | transformers |
### opus-mt-en-rw
* source languages: en
* target languages: rw
* OPUS readme: [en-rw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-rw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-rw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.rw | 33.3 | 0.569 |
| Tatoeba.en.rw | 13.8 | 0.503 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-rw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"rw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #rw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-rw
* source languages: en
* target languages: rw
* OPUS readme: en-rw
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.3, chr-F: 0.569
testset: URL, BLEU: 13.8, chr-F: 0.503
| [
"### opus-mt-en-rw\n\n\n* source languages: en\n* target languages: rw\n* OPUS readme: en-rw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.569\ntestset: URL, BLEU: 13.8, chr-F: 0.503"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #rw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-rw\n\n\n* source languages: en\n* target languages: rw\n* OPUS readme: en-rw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.569\ntestset: URL, BLEU: 13.8, chr-F: 0.503"
] | [
52,
132
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #rw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-rw\n\n\n* source languages: en\n* target languages: rw\n* OPUS readme: en-rw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.569\ntestset: URL, BLEU: 13.8, chr-F: 0.503"
] |
translation | transformers |
### eng-sal
* source group: English
* target group: Salishan languages
* OPUS readme: [eng-sal](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sal/README.md)
* model: transformer
* source language(s): eng
* target language(s): shs_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-14.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.zip)
* test set translations: [opus-2020-07-14.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.test.txt)
* test set scores: [opus-2020-07-14.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.multi | 32.6 | 0.585 |
| Tatoeba-test.eng.shs | 1.1 | 0.072 |
| Tatoeba-test.eng-shs.eng.shs | 1.2 | 0.065 |
### System Info:
- hf_name: eng-sal
- source_languages: eng
- target_languages: sal
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sal/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sal']
- src_constituents: {'eng'}
- tgt_constituents: {'shs_Latn'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sal/opus-2020-07-14.test.txt
- src_alpha3: eng
- tgt_alpha3: sal
- short_pair: en-sal
- chrF2_score: 0.07200000000000001
- bleu: 1.1
- brevity_penalty: 1.0
- ref_len: 199.0
- src_name: English
- tgt_name: Salishan languages
- train_date: 2020-07-14
- src_alpha2: en
- tgt_alpha2: sal
- prefer_old: False
- long_pair: eng-sal
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "sal"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sal | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sal",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"sal"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sal #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-sal
* source group: English
* target group: Salishan languages
* OPUS readme: eng-sal
* model: transformer
* source language(s): eng
* target language(s): shs\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.6, chr-F: 0.585
testset: URL, BLEU: 1.1, chr-F: 0.072
testset: URL, BLEU: 1.2, chr-F: 0.065
### System Info:
* hf\_name: eng-sal
* source\_languages: eng
* target\_languages: sal
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'sal']
* src\_constituents: {'eng'}
* tgt\_constituents: {'shs\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: sal
* short\_pair: en-sal
* chrF2\_score: 0.07200000000000001
* bleu: 1.1
* brevity\_penalty: 1.0
* ref\_len: 199.0
* src\_name: English
* tgt\_name: Salishan languages
* train\_date: 2020-07-14
* src\_alpha2: en
* tgt\_alpha2: sal
* prefer\_old: False
* long\_pair: eng-sal
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
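The System Info block above reports a `brevity_penalty` of 1.0 against a `ref_len` of 199.0. That field is the standard BLEU brevity penalty, which equals 1.0 whenever the hypothesis corpus is at least as long as the reference; a sketch of the formula (not the actual evaluation code used for these cards):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: float) -> float:
    """BLEU brevity penalty: 1.0 when the hypothesis is at least as long
    as the reference, exp(1 - ref_len/hyp_len) when it is shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

A penalty of exactly 1.0, as reported here, simply means the system's output was not shorter than the 199-token reference.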
| [
"### eng-sal\n\n\n* source group: English\n* target group: Salishan languages\n* OPUS readme: eng-sal\n* model: transformer\n* source language(s): eng\n* target language(s): shs\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.585\ntestset: URL, BLEU: 1.1, chr-F: 0.072\ntestset: URL, BLEU: 1.2, chr-F: 0.065",
"### System Info:\n\n\n* hf\\_name: eng-sal\n* source\\_languages: eng\n* target\\_languages: sal\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sal']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'shs\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sal\n* short\\_pair: en-sal\n* chrF2\\_score: 0.07200000000000001\n* bleu: 1.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 199.0\n* src\\_name: English\n* tgt\\_name: Salishan languages\n* train\\_date: 2020-07-14\n* src\\_alpha2: en\n* tgt\\_alpha2: sal\n* prefer\\_old: False\n* long\\_pair: eng-sal\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sal #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-sal\n\n\n* source group: English\n* target group: Salishan languages\n* OPUS readme: eng-sal\n* model: transformer\n* source language(s): eng\n* target language(s): shs\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.585\ntestset: URL, BLEU: 1.1, chr-F: 0.072\ntestset: URL, BLEU: 1.2, chr-F: 0.065",
"### System Info:\n\n\n* hf\\_name: eng-sal\n* source\\_languages: eng\n* target\\_languages: sal\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sal']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'shs\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sal\n* short\\_pair: en-sal\n* chrF2\\_score: 0.07200000000000001\n* bleu: 1.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 199.0\n* src\\_name: English\n* tgt\\_name: Salishan languages\n* train\\_date: 2020-07-14\n* src\\_alpha2: en\n* tgt\\_alpha2: sal\n* prefer\\_old: False\n* long\\_pair: eng-sal\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
182,
404
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sal #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-sal\n\n\n* source group: English\n* target group: Salishan languages\n* OPUS readme: eng-sal\n* model: transformer\n* source language(s): eng\n* target language(s): shs\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.585\ntestset: URL, BLEU: 1.1, chr-F: 0.072\ntestset: URL, BLEU: 1.2, chr-F: 0.065### System Info:\n\n\n* hf\\_name: eng-sal\n* source\\_languages: eng\n* target\\_languages: sal\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sal']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'shs\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sal\n* short\\_pair: en-sal\n* chrF2\\_score: 0.07200000000000001\n* bleu: 1.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 199.0\n* src\\_name: English\n* tgt\\_name: Salishan languages\n* train\\_date: 2020-07-14\n* src\\_alpha2: en\n* tgt\\_alpha2: sal\n* prefer\\_old: False\n* long\\_pair: eng-sal\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-sem
* source group: English
* target group: Semitic languages
* OPUS readme: [eng-sem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sem/README.md)
* model: transformer
* source language(s): eng
* target language(s): acm afb amh apc ara arq ary arz heb mlt tir
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-amh.eng.amh | 11.2 | 0.480 |
| Tatoeba-test.eng-ara.eng.ara | 12.7 | 0.417 |
| Tatoeba-test.eng-heb.eng.heb | 33.8 | 0.564 |
| Tatoeba-test.eng-mlt.eng.mlt | 18.7 | 0.554 |
| Tatoeba-test.eng.multi | 23.5 | 0.486 |
| Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.248 |
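The card above notes that a sentence-initial token of the form `>>id<<` selects the target language for this multilingual model. A minimal sketch of preparing such inputs (the helper function is illustrative, not part of the released model; `heb` is one of the valid target IDs listed above):

```python
# Sketch: prepend the target-language token that multilingual OPUS-MT
# models such as en-sem expect at the start of every source sentence.
# The helper itself is ours; only the >>id<< convention comes from the card.
def prepare_inputs(sentences, target_lang):
    """Prefix each source sentence with the >>id<< language token."""
    token = f">>{target_lang}<<"
    return [f"{token} {s}" for s in sentences]

batch = prepare_inputs(["How are you?", "Good morning."], "heb")
# Each entry now starts with ">>heb<<", ready to be tokenized.
```

With `transformers` installed, these strings would then be passed to the tokenizer and model loaded from `Helsinki-NLP/opus-mt-en-sem`.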
### System Info:
- hf_name: eng-sem
- source_languages: eng
- target_languages: sem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'mt', 'ar', 'he', 'ti', 'am', 'sem']
- src_constituents: {'eng'}
- tgt_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sem/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: sem
- short_pair: en-sem
- chrF2_score: 0.486
- bleu: 23.5
- brevity_penalty: 1.0
- ref_len: 59258.0
- src_name: English
- tgt_name: Semitic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: sem
- prefer_old: False
- long_pair: eng-sem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "mt", "ar", "he", "ti", "am", "sem"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sem | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"mt",
"ar",
"he",
"ti",
"am",
"sem",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"mt",
"ar",
"he",
"ti",
"am",
"sem"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #mt #ar #he #ti #am #sem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-sem
* source group: English
* target group: Semitic languages
* OPUS readme: eng-sem
* model: transformer
* source language(s): eng
* target language(s): acm afb amh apc ara arq ary arz heb mlt tir
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 11.2, chr-F: 0.480
testset: URL, BLEU: 12.7, chr-F: 0.417
testset: URL, BLEU: 33.8, chr-F: 0.564
testset: URL, BLEU: 18.7, chr-F: 0.554
testset: URL, BLEU: 23.5, chr-F: 0.486
testset: URL, BLEU: 2.7, chr-F: 0.248
### System Info:
* hf\_name: eng-sem
* source\_languages: eng
* target\_languages: sem
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'mt', 'ar', 'he', 'ti', 'am', 'sem']
* src\_constituents: {'eng'}
* tgt\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: sem
* short\_pair: en-sem
* chrF2\_score: 0.486
* bleu: 23.5
* brevity\_penalty: 1.0
* ref\_len: 59258.0
* src\_name: English
* tgt\_name: Semitic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: sem
* prefer\_old: False
* long\_pair: eng-sem
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-sem\n\n\n* source group: English\n* target group: Semitic languages\n* OPUS readme: eng-sem\n* model: transformer\n* source language(s): eng\n* target language(s): acm afb amh apc ara arq ary arz heb mlt tir\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.2, chr-F: 0.480\ntestset: URL, BLEU: 12.7, chr-F: 0.417\ntestset: URL, BLEU: 33.8, chr-F: 0.564\ntestset: URL, BLEU: 18.7, chr-F: 0.554\ntestset: URL, BLEU: 23.5, chr-F: 0.486\ntestset: URL, BLEU: 2.7, chr-F: 0.248",
"### System Info:\n\n\n* hf\\_name: eng-sem\n* source\\_languages: eng\n* target\\_languages: sem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'mt', 'ar', 'he', 'ti', 'am', 'sem']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sem\n* short\\_pair: en-sem\n* chrF2\\_score: 0.486\n* bleu: 23.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 59258.0\n* src\\_name: English\n* tgt\\_name: Semitic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: sem\n* prefer\\_old: False\n* long\\_pair: eng-sem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #mt #ar #he #ti #am #sem #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-sem\n\n\n* source group: English\n* target group: Semitic languages\n* OPUS readme: eng-sem\n* model: transformer\n* source language(s): eng\n* target language(s): acm afb amh apc ara arq ary arz heb mlt tir\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.2, chr-F: 0.480\ntestset: URL, BLEU: 12.7, chr-F: 0.417\ntestset: URL, BLEU: 33.8, chr-F: 0.564\ntestset: URL, BLEU: 18.7, chr-F: 0.554\ntestset: URL, BLEU: 23.5, chr-F: 0.486\ntestset: URL, BLEU: 2.7, chr-F: 0.248",
"### System Info:\n\n\n* hf\\_name: eng-sem\n* source\\_languages: eng\n* target\\_languages: sem\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'mt', 'ar', 'he', 'ti', 'am', 'sem']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sem\n* short\\_pair: en-sem\n* chrF2\\_score: 0.486\n* bleu: 23.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 59258.0\n* src\\_name: English\n* tgt\\_name: Semitic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: sem\n* prefer\\_old: False\n* long\\_pair: eng-sem\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
62,
288,
468
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sal #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-sal\n\n\n* source group: English\n* target group: Salishan languages\n* OPUS readme: eng-sal\n* model: transformer\n* source language(s): eng\n* target language(s): shs\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.585\ntestset: URL, BLEU: 1.1, chr-F: 0.072\ntestset: URL, BLEU: 1.2, chr-F: 0.065### System Info:\n\n\n* hf\\_name: eng-sal\n* source\\_languages: eng\n* target\\_languages: sal\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sal']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'shs\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sal\n* short\\_pair: en-sal\n* chrF2\\_score: 0.07200000000000001\n* bleu: 1.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 199.0\n* src\\_name: English\n* tgt\\_name: Salishan languages\n* train\\_date: 2020-07-14\n* src\\_alpha2: en\n* tgt\\_alpha2: sal\n* prefer\\_old: False\n* long\\_pair: eng-sal\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-sg
* source languages: en
* target languages: sg
* OPUS readme: [en-sg](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sg/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sg/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sg/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sg/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sg | 37.0 | 0.544 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-sg
* source languages: en
* target languages: sg
* OPUS readme: en-sg
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.0, chr-F: 0.544
| [
"### opus-mt-en-sg\n\n\n* source languages: en\n* target languages: sg\n* OPUS readme: en-sg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.544"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-sg\n\n\n* source languages: en\n* target languages: sg\n* OPUS readme: en-sg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.544"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-sg\n\n\n* source languages: en\n* target languages: sg\n* OPUS readme: en-sg\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.544"
] |
translation | transformers |
### eng-sit
* source group: English
* target group: Sino-Tibetan languages
* OPUS readme: [eng-sit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sit/README.md)
* model: transformer
* source language(s): eng
* target language(s): bod brx brx_Latn cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans mya nan wuu yue yue_Hans yue_Hant zho zho_Hans zho_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enzh-engzho.eng.zho | 23.5 | 0.217 |
| newstest2017-enzh-engzho.eng.zho | 23.2 | 0.223 |
| newstest2018-enzh-engzho.eng.zho | 25.0 | 0.230 |
| newstest2019-enzh-engzho.eng.zho | 20.2 | 0.225 |
| Tatoeba-test.eng-bod.eng.bod | 0.4 | 0.147 |
| Tatoeba-test.eng-brx.eng.brx | 0.5 | 0.012 |
| Tatoeba-test.eng.multi | 25.7 | 0.223 |
| Tatoeba-test.eng-mya.eng.mya | 0.2 | 0.222 |
| Tatoeba-test.eng-zho.eng.zho | 29.2 | 0.249 |
### System Info:
- hf_name: eng-sit
- source_languages: eng
- target_languages: sit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'sit']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sit/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: sit
- short_pair: en-sit
- chrF2_score: 0.223
- bleu: 25.7
- brevity_penalty: 0.907
- ref_len: 109538.0
- src_name: English
- tgt_name: Sino-Tibetan languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: sit
- prefer_old: False
- long_pair: eng-sit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "sit"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sit | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"sit"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-sit
* source group: English
* target group: Sino-Tibetan languages
* OPUS readme: eng-sit
* model: transformer
* source language(s): eng
* target language(s): bod brx brx\_Latn cjy\_Hans cjy\_Hant cmn cmn\_Hans cmn\_Hant gan lzh lzh\_Hans mya nan wuu yue yue\_Hans yue\_Hant zho zho\_Hans zho\_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.5, chr-F: 0.217
testset: URL, BLEU: 23.2, chr-F: 0.223
testset: URL, BLEU: 25.0, chr-F: 0.230
testset: URL, BLEU: 20.2, chr-F: 0.225
testset: URL, BLEU: 0.4, chr-F: 0.147
testset: URL, BLEU: 0.5, chr-F: 0.012
testset: URL, BLEU: 25.7, chr-F: 0.223
testset: URL, BLEU: 0.2, chr-F: 0.222
testset: URL, BLEU: 29.2, chr-F: 0.249
### System Info:
* hf\_name: eng-sit
* source\_languages: eng
* target\_languages: sit
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'sit']
* src\_constituents: {'eng'}
* tgt\_constituents: set()
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: sit
* short\_pair: en-sit
* chrF2\_score: 0.223
* bleu: 25.7
* brevity\_penalty: 0.907
* ref\_len: 109538.0
* src\_name: English
* tgt\_name: Sino-Tibetan languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: sit
* prefer\_old: False
* long\_pair: eng-sit
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-sit\n\n\n* source group: English\n* target group: Sino-Tibetan languages\n* OPUS readme: eng-sit\n* model: transformer\n* source language(s): eng\n* target language(s): bod brx brx\\_Latn cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant gan lzh lzh\\_Hans mya nan wuu yue yue\\_Hans yue\\_Hant zho zho\\_Hans zho\\_Hant\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.217\ntestset: URL, BLEU: 23.2, chr-F: 0.223\ntestset: URL, BLEU: 25.0, chr-F: 0.230\ntestset: URL, BLEU: 20.2, chr-F: 0.225\ntestset: URL, BLEU: 0.4, chr-F: 0.147\ntestset: URL, BLEU: 0.5, chr-F: 0.012\ntestset: URL, BLEU: 25.7, chr-F: 0.223\ntestset: URL, BLEU: 0.2, chr-F: 0.222\ntestset: URL, BLEU: 29.2, chr-F: 0.249",
"### System Info:\n\n\n* hf\\_name: eng-sit\n* source\\_languages: eng\n* target\\_languages: sit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sit']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sit\n* short\\_pair: en-sit\n* chrF2\\_score: 0.223\n* bleu: 25.7\n* brevity\\_penalty: 0.907\n* ref\\_len: 109538.0\n* src\\_name: English\n* tgt\\_name: Sino-Tibetan languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: sit\n* prefer\\_old: False\n* long\\_pair: eng-sit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-sit\n\n\n* source group: English\n* target group: Sino-Tibetan languages\n* OPUS readme: eng-sit\n* model: transformer\n* source language(s): eng\n* target language(s): bod brx brx\\_Latn cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant gan lzh lzh\\_Hans mya nan wuu yue yue\\_Hans yue\\_Hant zho zho\\_Hans zho\\_Hant\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.217\ntestset: URL, BLEU: 23.2, chr-F: 0.223\ntestset: URL, BLEU: 25.0, chr-F: 0.230\ntestset: URL, BLEU: 20.2, chr-F: 0.225\ntestset: URL, BLEU: 0.4, chr-F: 0.147\ntestset: URL, BLEU: 0.5, chr-F: 0.012\ntestset: URL, BLEU: 25.7, chr-F: 0.223\ntestset: URL, BLEU: 0.2, chr-F: 0.222\ntestset: URL, BLEU: 29.2, chr-F: 0.249",
"### System Info:\n\n\n* hf\\_name: eng-sit\n* source\\_languages: eng\n* target\\_languages: sit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sit']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sit\n* short\\_pair: en-sit\n* chrF2\\_score: 0.223\n* bleu: 25.7\n* brevity\\_penalty: 0.907\n* ref\\_len: 109538.0\n* src\\_name: English\n* tgt\\_name: Sino-Tibetan languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: sit\n* prefer\\_old: False\n* long\\_pair: eng-sit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
405,
392
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-sit\n\n\n* source group: English\n* target group: Sino-Tibetan languages\n* OPUS readme: eng-sit\n* model: transformer\n* source language(s): eng\n* target language(s): bod brx brx\\_Latn cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant gan lzh lzh\\_Hans mya nan wuu yue yue\\_Hans yue\\_Hant zho zho\\_Hans zho\\_Hant\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.217\ntestset: URL, BLEU: 23.2, chr-F: 0.223\ntestset: URL, BLEU: 25.0, chr-F: 0.230\ntestset: URL, BLEU: 20.2, chr-F: 0.225\ntestset: URL, BLEU: 0.4, chr-F: 0.147\ntestset: URL, BLEU: 0.5, chr-F: 0.012\ntestset: URL, BLEU: 25.7, chr-F: 0.223\ntestset: URL, BLEU: 0.2, chr-F: 0.222\ntestset: URL, BLEU: 29.2, chr-F: 0.249### System Info:\n\n\n* hf\\_name: eng-sit\n* source\\_languages: eng\n* target\\_languages: sit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'sit']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sit\n* short\\_pair: en-sit\n* chrF2\\_score: 0.223\n* bleu: 25.7\n* brevity\\_penalty: 0.907\n* ref\\_len: 109538.0\n* src\\_name: English\n* tgt\\_name: Sino-Tibetan languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: sit\n* prefer\\_old: False\n* long\\_pair: eng-sit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-sk
* source languages: en
* target languages: sk
* OPUS readme: [en-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sk | 36.8 | 0.578 |
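The chr-F column above is a character n-gram F-score. A much-simplified sketch of the idea (sacreBLEU's chrF adds word n-grams and different corpus-level defaults, so its numbers will not match this toy version exactly):

```python
from collections import Counter

def chrf(hyp, ref, max_n=6, beta=2.0):
    """Toy character n-gram F-beta score (chrF-like), averaged over n."""
    scores = []
    for n in range(1, max_n + 1):
        h = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        r = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        if not h or not r:
            continue  # string shorter than n: skip this order
        overlap = sum((h & r).values())
        prec = overlap / sum(h.values())
        rec = overlap / sum(r.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0
```

A perfect match scores 1.0; the 0.578 reported above is the corpus-level chrF on the JW300 test set.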
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-sk
* source languages: en
* target languages: sk
* OPUS readme: en-sk
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 36.8, chr-F: 0.578
| [
"### opus-mt-en-sk\n\n\n* source languages: en\n* target languages: sk\n* OPUS readme: en-sk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.8, chr-F: 0.578"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-sk\n\n\n* source languages: en\n* target languages: sk\n* OPUS readme: en-sk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.8, chr-F: 0.578"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-sk\n\n\n* source languages: en\n* target languages: sk\n* OPUS readme: en-sk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.8, chr-F: 0.578"
] |
translation | transformers |
### eng-sla
* source group: English
* target group: Slavic languages
* OPUS readme: [eng-sla](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sla/README.md)
* model: transformer
* source language(s): eng
* target language(s): bel bel_Latn bos_Latn bul bul_Latn ces csb_Latn dsb hrv hsb mkd orv_Cyrl pol rue rus slv srp_Cyrl srp_Latn ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sla/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sla/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sla/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engces.eng.ces | 20.1 | 0.484 |
| news-test2008-engces.eng.ces | 17.7 | 0.461 |
| newstest2009-engces.eng.ces | 19.1 | 0.479 |
| newstest2010-engces.eng.ces | 19.3 | 0.483 |
| newstest2011-engces.eng.ces | 20.4 | 0.486 |
| newstest2012-engces.eng.ces | 18.3 | 0.461 |
| newstest2012-engrus.eng.rus | 27.4 | 0.551 |
| newstest2013-engces.eng.ces | 21.5 | 0.489 |
| newstest2013-engrus.eng.rus | 20.9 | 0.490 |
| newstest2015-encs-engces.eng.ces | 21.1 | 0.496 |
| newstest2015-enru-engrus.eng.rus | 24.5 | 0.536 |
| newstest2016-encs-engces.eng.ces | 23.6 | 0.515 |
| newstest2016-enru-engrus.eng.rus | 23.0 | 0.519 |
| newstest2017-encs-engces.eng.ces | 19.2 | 0.474 |
| newstest2017-enru-engrus.eng.rus | 25.0 | 0.541 |
| newstest2018-encs-engces.eng.ces | 19.3 | 0.479 |
| newstest2018-enru-engrus.eng.rus | 22.3 | 0.526 |
| newstest2019-encs-engces.eng.ces | 20.4 | 0.486 |
| newstest2019-enru-engrus.eng.rus | 24.0 | 0.506 |
| Tatoeba-test.eng-bel.eng.bel | 22.9 | 0.489 |
| Tatoeba-test.eng-bul.eng.bul | 46.7 | 0.652 |
| Tatoeba-test.eng-ces.eng.ces | 42.7 | 0.624 |
| Tatoeba-test.eng-csb.eng.csb | 1.4 | 0.210 |
| Tatoeba-test.eng-dsb.eng.dsb | 1.4 | 0.165 |
| Tatoeba-test.eng-hbs.eng.hbs | 40.3 | 0.616 |
| Tatoeba-test.eng-hsb.eng.hsb | 14.3 | 0.344 |
| Tatoeba-test.eng-mkd.eng.mkd | 44.1 | 0.635 |
| Tatoeba-test.eng.multi | 41.0 | 0.610 |
| Tatoeba-test.eng-orv.eng.orv | 0.3 | 0.014 |
| Tatoeba-test.eng-pol.eng.pol | 42.0 | 0.637 |
| Tatoeba-test.eng-rue.eng.rue | 0.3 | 0.012 |
| Tatoeba-test.eng-rus.eng.rus | 40.5 | 0.612 |
| Tatoeba-test.eng-slv.eng.slv | 18.8 | 0.357 |
| Tatoeba-test.eng-ukr.eng.ukr | 38.8 | 0.600 |
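The System Info section reports `brevity_penalty: 0.976` alongside `ref_len`. As a reminder of what that factor means, here is the standard BLEU brevity penalty (the generic formula, not a re-derivation of the exact evaluation call used for this card):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: 1.0 when the hypothesis corpus is at least
    as long as the reference, exp(1 - ref_len/hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

A value of 0.976, as reported here, means the system output was slightly shorter than the 64809-token reference, and the BLEU score was scaled down accordingly.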
### System Info:
- hf_name: eng-sla
- source_languages: eng
- target_languages: sla
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-sla/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']
- src_constituents: {'eng'}
- tgt_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sla/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-sla/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: sla
- short_pair: en-sla
- chrF2_score: 0.61
- bleu: 41.0
- brevity_penalty: 0.976
- ref_len: 64809.0
- src_name: English
- tgt_name: Slavic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: sla
- prefer_old: False
- long_pair: eng-sla
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "be", "hr", "mk", "cs", "ru", "pl", "bg", "uk", "sl", "sla"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sla | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"be",
"hr",
"mk",
"cs",
"ru",
"pl",
"bg",
"uk",
"sl",
"sla",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"be",
"hr",
"mk",
"cs",
"ru",
"pl",
"bg",
"uk",
"sl",
"sla"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #be #hr #mk #cs #ru #pl #bg #uk #sl #sla #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-sla
* source group: English
* target group: Slavic languages
* OPUS readme: eng-sla
* model: transformer
* source language(s): eng
* target language(s): bel bel\_Latn bos\_Latn bul bul\_Latn ces csb\_Latn dsb hrv hsb mkd orv\_Cyrl pol rue rus slv srp\_Cyrl srp\_Latn ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.1, chr-F: 0.484
testset: URL, BLEU: 17.7, chr-F: 0.461
testset: URL, BLEU: 19.1, chr-F: 0.479
testset: URL, BLEU: 19.3, chr-F: 0.483
testset: URL, BLEU: 20.4, chr-F: 0.486
testset: URL, BLEU: 18.3, chr-F: 0.461
testset: URL, BLEU: 27.4, chr-F: 0.551
testset: URL, BLEU: 21.5, chr-F: 0.489
testset: URL, BLEU: 20.9, chr-F: 0.490
testset: URL, BLEU: 21.1, chr-F: 0.496
testset: URL, BLEU: 24.5, chr-F: 0.536
testset: URL, BLEU: 23.6, chr-F: 0.515
testset: URL, BLEU: 23.0, chr-F: 0.519
testset: URL, BLEU: 19.2, chr-F: 0.474
testset: URL, BLEU: 25.0, chr-F: 0.541
testset: URL, BLEU: 19.3, chr-F: 0.479
testset: URL, BLEU: 22.3, chr-F: 0.526
testset: URL, BLEU: 20.4, chr-F: 0.486
testset: URL, BLEU: 24.0, chr-F: 0.506
testset: URL, BLEU: 22.9, chr-F: 0.489
testset: URL, BLEU: 46.7, chr-F: 0.652
testset: URL, BLEU: 42.7, chr-F: 0.624
testset: URL, BLEU: 1.4, chr-F: 0.210
testset: URL, BLEU: 1.4, chr-F: 0.165
testset: URL, BLEU: 40.3, chr-F: 0.616
testset: URL, BLEU: 14.3, chr-F: 0.344
testset: URL, BLEU: 44.1, chr-F: 0.635
testset: URL, BLEU: 41.0, chr-F: 0.610
testset: URL, BLEU: 0.3, chr-F: 0.014
testset: URL, BLEU: 42.0, chr-F: 0.637
testset: URL, BLEU: 0.3, chr-F: 0.012
testset: URL, BLEU: 40.5, chr-F: 0.612
testset: URL, BLEU: 18.8, chr-F: 0.357
testset: URL, BLEU: 38.8, chr-F: 0.600
### System Info:
* hf\_name: eng-sla
* source\_languages: eng
* target\_languages: sla
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']
* src\_constituents: {'eng'}
* tgt\_constituents: {'bel', 'hrv', 'orv\_Cyrl', 'mkd', 'bel\_Latn', 'srp\_Latn', 'bul\_Latn', 'ces', 'bos\_Latn', 'csb\_Latn', 'dsb', 'hsb', 'rus', 'srp\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: sla
* short\_pair: en-sla
* chrF2\_score: 0.61
* bleu: 41.0
* brevity\_penalty: 0.976
* ref\_len: 64809.0
* src\_name: English
* tgt\_name: Slavic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: sla
* prefer\_old: False
* long\_pair: eng-sla
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-sla\n\n\n* source group: English\n* target group: Slavic languages\n* OPUS readme: eng-sla\n* model: transformer\n* source language(s): eng\n* target language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces csb\\_Latn dsb hrv hsb mkd orv\\_Cyrl pol rue rus slv srp\\_Cyrl srp\\_Latn ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.1, chr-F: 0.484\ntestset: URL, BLEU: 17.7, chr-F: 0.461\ntestset: URL, BLEU: 19.1, chr-F: 0.479\ntestset: URL, BLEU: 19.3, chr-F: 0.483\ntestset: URL, BLEU: 20.4, chr-F: 0.486\ntestset: URL, BLEU: 18.3, chr-F: 0.461\ntestset: URL, BLEU: 27.4, chr-F: 0.551\ntestset: URL, BLEU: 21.5, chr-F: 0.489\ntestset: URL, BLEU: 20.9, chr-F: 0.490\ntestset: URL, BLEU: 21.1, chr-F: 0.496\ntestset: URL, BLEU: 24.5, chr-F: 0.536\ntestset: URL, BLEU: 23.6, chr-F: 0.515\ntestset: URL, BLEU: 23.0, chr-F: 0.519\ntestset: URL, BLEU: 19.2, chr-F: 0.474\ntestset: URL, BLEU: 25.0, chr-F: 0.541\ntestset: URL, BLEU: 19.3, chr-F: 0.479\ntestset: URL, BLEU: 22.3, chr-F: 0.526\ntestset: URL, BLEU: 20.4, chr-F: 0.486\ntestset: URL, BLEU: 24.0, chr-F: 0.506\ntestset: URL, BLEU: 22.9, chr-F: 0.489\ntestset: URL, BLEU: 46.7, chr-F: 0.652\ntestset: URL, BLEU: 42.7, chr-F: 0.624\ntestset: URL, BLEU: 1.4, chr-F: 0.210\ntestset: URL, BLEU: 1.4, chr-F: 0.165\ntestset: URL, BLEU: 40.3, chr-F: 0.616\ntestset: URL, BLEU: 14.3, chr-F: 0.344\ntestset: URL, BLEU: 44.1, chr-F: 0.635\ntestset: URL, BLEU: 41.0, chr-F: 0.610\ntestset: URL, BLEU: 0.3, chr-F: 0.014\ntestset: URL, BLEU: 42.0, chr-F: 0.637\ntestset: URL, BLEU: 0.3, chr-F: 0.012\ntestset: URL, BLEU: 40.5, chr-F: 0.612\ntestset: URL, BLEU: 18.8, chr-F: 0.357\ntestset: URL, BLEU: 38.8, chr-F: 0.600",
"### System Info:\n\n\n* hf\\_name: eng-sla\n* source\\_languages: eng\n* target\\_languages: sla\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sla\n* short\\_pair: en-sla\n* chrF2\\_score: 0.61\n* bleu: 41.0\n* brevity\\_penalty: 0.976\n* ref\\_len: 64809.0\n* src\\_name: English\n* tgt\\_name: Slavic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: sla\n* prefer\\_old: False\n* long\\_pair: eng-sla\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #be #hr #mk #cs #ru #pl #bg #uk #sl #sla #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-sla\n\n\n* source group: English\n* target group: Slavic languages\n* OPUS readme: eng-sla\n* model: transformer\n* source language(s): eng\n* target language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces csb\\_Latn dsb hrv hsb mkd orv\\_Cyrl pol rue rus slv srp\\_Cyrl srp\\_Latn ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.1, chr-F: 0.484\ntestset: URL, BLEU: 17.7, chr-F: 0.461\ntestset: URL, BLEU: 19.1, chr-F: 0.479\ntestset: URL, BLEU: 19.3, chr-F: 0.483\ntestset: URL, BLEU: 20.4, chr-F: 0.486\ntestset: URL, BLEU: 18.3, chr-F: 0.461\ntestset: URL, BLEU: 27.4, chr-F: 0.551\ntestset: URL, BLEU: 21.5, chr-F: 0.489\ntestset: URL, BLEU: 20.9, chr-F: 0.490\ntestset: URL, BLEU: 21.1, chr-F: 0.496\ntestset: URL, BLEU: 24.5, chr-F: 0.536\ntestset: URL, BLEU: 23.6, chr-F: 0.515\ntestset: URL, BLEU: 23.0, chr-F: 0.519\ntestset: URL, BLEU: 19.2, chr-F: 0.474\ntestset: URL, BLEU: 25.0, chr-F: 0.541\ntestset: URL, BLEU: 19.3, chr-F: 0.479\ntestset: URL, BLEU: 22.3, chr-F: 0.526\ntestset: URL, BLEU: 20.4, chr-F: 0.486\ntestset: URL, BLEU: 24.0, chr-F: 0.506\ntestset: URL, BLEU: 22.9, chr-F: 0.489\ntestset: URL, BLEU: 46.7, chr-F: 0.652\ntestset: URL, BLEU: 42.7, chr-F: 0.624\ntestset: URL, BLEU: 1.4, chr-F: 0.210\ntestset: URL, BLEU: 1.4, chr-F: 0.165\ntestset: URL, BLEU: 40.3, chr-F: 0.616\ntestset: URL, BLEU: 14.3, chr-F: 0.344\ntestset: URL, BLEU: 44.1, chr-F: 0.635\ntestset: URL, BLEU: 41.0, chr-F: 0.610\ntestset: URL, BLEU: 0.3, chr-F: 0.014\ntestset: URL, BLEU: 42.0, chr-F: 0.637\ntestset: URL, BLEU: 0.3, chr-F: 0.012\ntestset: URL, BLEU: 40.5, chr-F: 0.612\ntestset: URL, BLEU: 18.8, chr-F: 0.357\ntestset: URL, BLEU: 38.8, chr-F: 0.600",
"### System Info:\n\n\n* hf\\_name: eng-sla\n* source\\_languages: eng\n* target\\_languages: sla\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sla\n* short\\_pair: en-sla\n* chrF2\\_score: 0.61\n* bleu: 41.0\n* brevity\\_penalty: 0.976\n* ref\\_len: 64809.0\n* src\\_name: English\n* tgt\\_name: Slavic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: sla\n* prefer\\_old: False\n* long\\_pair: eng-sla\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
71,
974,
555
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #be #hr #mk #cs #ru #pl #bg #uk #sl #sla #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-sla\n\n\n* source group: English\n* target group: Slavic languages\n* OPUS readme: eng-sla\n* model: transformer\n* source language(s): eng\n* target language(s): bel bel\\_Latn bos\\_Latn bul bul\\_Latn ces csb\\_Latn dsb hrv hsb mkd orv\\_Cyrl pol rue rus slv srp\\_Cyrl srp\\_Latn ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.1, chr-F: 0.484\ntestset: URL, BLEU: 17.7, chr-F: 0.461\ntestset: URL, BLEU: 19.1, chr-F: 0.479\ntestset: URL, BLEU: 19.3, chr-F: 0.483\ntestset: URL, BLEU: 20.4, chr-F: 0.486\ntestset: URL, BLEU: 18.3, chr-F: 0.461\ntestset: URL, BLEU: 27.4, chr-F: 0.551\ntestset: URL, BLEU: 21.5, chr-F: 0.489\ntestset: URL, BLEU: 20.9, chr-F: 0.490\ntestset: URL, BLEU: 21.1, chr-F: 0.496\ntestset: URL, BLEU: 24.5, chr-F: 0.536\ntestset: URL, BLEU: 23.6, chr-F: 0.515\ntestset: URL, BLEU: 23.0, chr-F: 0.519\ntestset: URL, BLEU: 19.2, chr-F: 0.474\ntestset: URL, BLEU: 25.0, chr-F: 0.541\ntestset: URL, BLEU: 19.3, chr-F: 0.479\ntestset: URL, BLEU: 22.3, chr-F: 0.526\ntestset: URL, BLEU: 20.4, chr-F: 0.486\ntestset: URL, BLEU: 24.0, chr-F: 0.506\ntestset: URL, BLEU: 22.9, chr-F: 0.489\ntestset: URL, BLEU: 46.7, chr-F: 0.652\ntestset: URL, BLEU: 42.7, chr-F: 0.624\ntestset: URL, BLEU: 1.4, chr-F: 0.210\ntestset: URL, BLEU: 1.4, chr-F: 0.165\ntestset: URL, BLEU: 40.3, chr-F: 0.616\ntestset: URL, BLEU: 14.3, chr-F: 0.344\ntestset: URL, BLEU: 44.1, chr-F: 0.635\ntestset: URL, BLEU: 41.0, chr-F: 0.610\ntestset: URL, BLEU: 0.3, chr-F: 0.014\ntestset: URL, BLEU: 
42.0, chr-F: 0.637\ntestset: URL, BLEU: 0.3, chr-F: 0.012\ntestset: URL, BLEU: 40.5, chr-F: 0.612\ntestset: URL, BLEU: 18.8, chr-F: 0.357\ntestset: URL, BLEU: 38.8, chr-F: 0.600### System Info:\n\n\n* hf\\_name: eng-sla\n* source\\_languages: eng\n* target\\_languages: sla\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bel', 'hrv', 'orv\\_Cyrl', 'mkd', 'bel\\_Latn', 'srp\\_Latn', 'bul\\_Latn', 'ces', 'bos\\_Latn', 'csb\\_Latn', 'dsb', 'hsb', 'rus', 'srp\\_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: sla\n* short\\_pair: en-sla\n* chrF2\\_score: 0.61\n* bleu: 41.0\n* brevity\\_penalty: 0.976\n* ref\\_len: 64809.0\n* src\\_name: English\n* tgt\\_name: Slavic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: sla\n* prefer\\_old: False\n* long\\_pair: eng-sla\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-sm
* source languages: en
* target languages: sm
* OPUS readme: [en-sm](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sm/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sm/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sm/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sm/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sm | 40.1 | 0.585 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sm | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sm",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sm #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-sm
* source languages: en
* target languages: sm
* OPUS readme: en-sm
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.1, chr-F: 0.585
| [
"### opus-mt-en-sm\n\n\n* source languages: en\n* target languages: sm\n* OPUS readme: en-sm\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.585"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sm #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-sm\n\n\n* source languages: en\n* target languages: sm\n* OPUS readme: en-sm\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.585"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sm #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-sm\n\n\n* source languages: en\n* target languages: sm\n* OPUS readme: en-sm\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.585"
] |
translation | transformers |
### opus-mt-en-sn
* source languages: en
* target languages: sn
* OPUS readme: [en-sn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sn/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sn/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sn/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.sn | 38.0 | 0.646 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sn | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-sn
* source languages: en
* target languages: sn
* OPUS readme: en-sn
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.0, chr-F: 0.646
| [
"### opus-mt-en-sn\n\n\n* source languages: en\n* target languages: sn\n* OPUS readme: en-sn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.0, chr-F: 0.646"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-sn\n\n\n* source languages: en\n* target languages: sn\n* OPUS readme: en-sn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.0, chr-F: 0.646"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-sn\n\n\n* source languages: en\n* target languages: sn\n* OPUS readme: en-sn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.0, chr-F: 0.646"
] |
translation | transformers |
### opus-mt-en-sq
* source languages: en
* target languages: sq
* OPUS readme: [en-sq](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sq/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.sq | 46.5 | 0.669 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sq | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sq",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-sq
* source languages: en
* target languages: sq
* OPUS readme: en-sq
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 46.5, chr-F: 0.669
| [
"### opus-mt-en-sq\n\n\n* source languages: en\n* target languages: sq\n* OPUS readme: en-sq\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.5, chr-F: 0.669"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-sq\n\n\n* source languages: en\n* target languages: sq\n* OPUS readme: en-sq\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.5, chr-F: 0.669"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sq #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-sq\n\n\n* source languages: en\n* target languages: sq\n* OPUS readme: en-sq\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.5, chr-F: 0.669"
] |
translation | transformers |
### opus-mt-en-ss
* source languages: en
* target languages: ss
* OPUS readme: [en-ss](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ss/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ss/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ss/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ss/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ss | 25.7 | 0.541 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ss | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ss",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ss #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ss
* source languages: en
* target languages: ss
* OPUS readme: en-ss
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.7, chr-F: 0.541
| [
"### opus-mt-en-ss\n\n\n* source languages: en\n* target languages: ss\n* OPUS readme: en-ss\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.541"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ss #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ss\n\n\n* source languages: en\n* target languages: ss\n* OPUS readme: en-ss\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.541"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ss #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ss\n\n\n* source languages: en\n* target languages: ss\n* OPUS readme: en-ss\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.541"
] |
translation | transformers |
### opus-mt-en-st
* source languages: en
* target languages: st
* OPUS readme: [en-st](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-st/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-st/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-st/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-st/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.st | 49.8 | 0.665 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-st | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"st",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #st #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-st
* source languages: en
* target languages: st
* OPUS readme: en-st
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 49.8, chr-F: 0.665
| [
"### opus-mt-en-st\n\n\n* source languages: en\n* target languages: st\n* OPUS readme: en-st\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.8, chr-F: 0.665"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #st #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-st\n\n\n* source languages: en\n* target languages: st\n* OPUS readme: en-st\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.8, chr-F: 0.665"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #st #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-st\n\n\n* source languages: en\n* target languages: st\n* OPUS readme: en-st\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.8, chr-F: 0.665"
] |
translation | transformers |
### opus-mt-en-sv
* source languages: en
* target languages: sv
* OPUS readme: [en-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sv/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.sv | 60.1 | 0.736 |
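
For a single language pair like en-sv, no `>>id<<` target token is needed; the model can be driven directly through the standard `transformers` translation pipeline. A hedged sketch — the repo id is taken from this card, and the batching helper is our own addition:

```python
from typing import Iterable, List


def batched(items: List[str], size: int) -> Iterable[List[str]]:
    """Split sentences into fixed-size batches before feeding the pipeline."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def translate_en_sv(sentences: List[str],
                    model_name: str = "Helsinki-NLP/opus-mt-en-sv",
                    batch_size: int = 8) -> List[str]:
    # Deferred import so the batching helper stays usable without transformers.
    from transformers import pipeline

    translator = pipeline("translation", model=model_name)
    out: List[str] = []
    for chunk in batched(sentences, batch_size):
        out.extend(r["translation_text"] for r in translator(chunk))
    return out


print(list(batched(["a", "b", "c"], 2)))  # [['a', 'b'], ['c']]
```

The same pattern applies to the other single-target models in this dump (en-sm, en-sn, en-sq, en-ss, en-st, en-sw) by swapping the model name.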
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sv | null | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-sv
* source languages: en
* target languages: sv
* OPUS readme: en-sv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 60.1, chr-F: 0.736
| [
"### opus-mt-en-sv\n\n\n* source languages: en\n* target languages: sv\n* OPUS readme: en-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.1, chr-F: 0.736"
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-sv\n\n\n* source languages: en\n* target languages: sv\n* OPUS readme: en-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.1, chr-F: 0.736"
] | [
53,
106
] | [
"TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #en #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-sv\n\n\n* source languages: en\n* target languages: sv\n* OPUS readme: en-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.1, chr-F: 0.736"
] |
translation | transformers |
### opus-mt-en-sw
* source languages: en
* target languages: sw
* OPUS readme: [en-sw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.en.sw | 24.2 | 0.527 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-sw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"sw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #sw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-sw
* source languages: en
* target languages: sw
* OPUS readme: en-sw
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 24.2, chr-F: 0.527
| [
"### opus-mt-en-sw\n\n\n* source languages: en\n* target languages: sw\n* OPUS readme: en-sw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.527"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-sw\n\n\n* source languages: en\n* target languages: sw\n* OPUS readme: en-sw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.527"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #sw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-sw\n\n\n* source languages: en\n* target languages: sw\n* OPUS readme: en-sw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.527"
] |
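Every record above reports a chr-F column alongside BLEU. As a rough illustration of what chr-F measures, here is a minimal character n-gram F-score sketch; this is a simplified stand-in, not the official chrF implementation (the real metric is usually computed corpus-level, e.g. via sacreBLEU), though it keeps the standard defaults of n-gram orders up to 6 and β = 2:

```python
from collections import Counter

def char_ngrams(text, n):
    # Character n-grams with whitespace removed, as chrF typically does.
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    # Average the character n-gram F-beta scores over orders 1..max_n.
    scores = []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(h.values()) == 0 or sum(r.values()) == 0:
            continue  # string too short for this n-gram order
        overlap = sum((h & r).values())
        prec = overlap / sum(h.values())
        rec = overlap / sum(r.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0

print(round(chrf("the cat sat", "the cat sat"), 3))  # identical strings score 1.0
```

A chr-F of 0.527 (as in the en-sw row) therefore means roughly half of the character n-gram mass is shared between system output and reference, balanced toward recall.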
translation | transformers |
### opus-mt-en-swc
* source languages: en
* target languages: swc
* OPUS readme: [en-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-swc/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.swc | 40.1 | 0.613 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-swc | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"swc",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #swc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-swc
* source languages: en
* target languages: swc
* OPUS readme: en-swc
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 40.1, chr-F: 0.613
| [
"### opus-mt-en-swc\n\n\n* source languages: en\n* target languages: swc\n* OPUS readme: en-swc\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.613"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #swc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-swc\n\n\n* source languages: en\n* target languages: swc\n* OPUS readme: en-swc\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.613"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #swc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-swc\n\n\n* source languages: en\n* target languages: swc\n* OPUS readme: en-swc\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.1, chr-F: 0.613"
] |
translation | transformers |
### opus-mt-en-tdt
* source languages: en
* target languages: tdt
* OPUS readme: [en-tdt](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tdt/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tdt/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tdt/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tdt/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tdt | 23.8 | 0.416 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-tdt | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tdt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #tdt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-tdt
* source languages: en
* target languages: tdt
* OPUS readme: en-tdt
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.8, chr-F: 0.416
| [
"### opus-mt-en-tdt\n\n\n* source languages: en\n* target languages: tdt\n* OPUS readme: en-tdt\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.416"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tdt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-tdt\n\n\n* source languages: en\n* target languages: tdt\n* OPUS readme: en-tdt\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.416"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tdt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-tdt\n\n\n* source languages: en\n* target languages: tdt\n* OPUS readme: en-tdt\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.416"
] |
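The BLEU column in these records is the other headline metric. As a hedged sketch of the underlying idea, the snippet below computes a simplified sentence-level BLEU (geometric mean of clipped n-gram precisions with a brevity penalty); the published scores are corpus-level BLEU produced by the OPUS-MT evaluation pipeline, so this is illustrative only:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    # Geometric mean of n-gram precisions (orders 1..max_n), clipped by
    # reference counts, times a brevity penalty for short hypotheses.
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        total = sum(h.values())
        if total == 0:
            return 0.0  # hypothesis shorter than n tokens
        precisions.append(sum((h & r).values()) / total)
    if min(precisions) == 0:
        return 0.0  # real BLEU smooths this; we simply floor at zero
    geo = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * geo

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 2))  # 1.0
```

Scaled to percent, a value like 23.8 (the en-tdt row) corresponds to a score of 0.238 under this formulation.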
translation | transformers |
### opus-mt-en-ti
* source languages: en
* target languages: ti
* OPUS readme: [en-ti](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ti/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ti/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ti/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ti/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ti | 25.3 | 0.382 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ti | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"ti",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #en #ti #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ti
* source languages: en
* target languages: ti
* OPUS readme: en-ti
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 25.3, chr-F: 0.382
| [
"### opus-mt-en-ti\n\n\n* source languages: en\n* target languages: ti\n* OPUS readme: en-ti\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.382"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #en #ti #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ti\n\n\n* source languages: en\n* target languages: ti\n* OPUS readme: en-ti\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.382"
] | [
55,
106
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #en #ti #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ti\n\n\n* source languages: en\n* target languages: ti\n* OPUS readme: en-ti\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.3, chr-F: 0.382"
] |
translation | transformers |
### opus-mt-en-tiv
* source languages: en
* target languages: tiv
* OPUS readme: [en-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tiv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tiv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tiv | 31.6 | 0.497 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-tiv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tiv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #tiv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-tiv
* source languages: en
* target languages: tiv
* OPUS readme: en-tiv
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.6, chr-F: 0.497
| [
"### opus-mt-en-tiv\n\n\n* source languages: en\n* target languages: tiv\n* OPUS readme: en-tiv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.497"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tiv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-tiv\n\n\n* source languages: en\n* target languages: tiv\n* OPUS readme: en-tiv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.497"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tiv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-tiv\n\n\n* source languages: en\n* target languages: tiv\n* OPUS readme: en-tiv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.497"
] |
translation | transformers |
### opus-mt-en-tl
* source languages: en
* target languages: tl
* OPUS readme: [en-tl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tl/README.md)
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tl/opus+bt-2020-02-26.zip)
* test set translations: [opus+bt-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tl/opus+bt-2020-02-26.test.txt)
* test set scores: [opus+bt-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tl/opus+bt-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.tl | 26.6 | 0.577 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-tl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #tl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-tl
* source languages: en
* target languages: tl
* OPUS readme: en-tl
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: opus+URL
* test set translations: opus+URL
* test set scores: opus+URL
Benchmarks
----------
testset: URL, BLEU: 26.6, chr-F: 0.577
| [
"### opus-mt-en-tl\n\n\n* source languages: en\n* target languages: tl\n* OPUS readme: en-tl\n* dataset: opus+bt\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.577"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-tl\n\n\n* source languages: en\n* target languages: tl\n* OPUS readme: en-tl\n* dataset: opus+bt\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.577"
] | [
52,
117
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-tl\n\n\n* source languages: en\n* target languages: tl\n* OPUS readme: en-tl\n* dataset: opus+bt\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: opus+URL\n* test set translations: opus+URL\n* test set scores: opus+URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.577"
] |
translation | transformers |
### opus-mt-en-tll
* source languages: en
* target languages: tll
* OPUS readme: [en-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tll/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tll/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tll/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tll | 33.6 | 0.556 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-tll | null | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"en",
"tll",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #en #tll #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-tll
* source languages: en
* target languages: tll
* OPUS readme: en-tll
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 33.6, chr-F: 0.556
| [
"### opus-mt-en-tll\n\n\n* source languages: en\n* target languages: tll\n* OPUS readme: en-tll\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.6, chr-F: 0.556"
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #en #tll #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-tll\n\n\n* source languages: en\n* target languages: tll\n* OPUS readme: en-tll\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.6, chr-F: 0.556"
] | [
56,
109
] | [
"TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #en #tll #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-tll\n\n\n* source languages: en\n* target languages: tll\n* OPUS readme: en-tll\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.6, chr-F: 0.556"
] |
translation | transformers |
### opus-mt-en-tn
* source languages: en
* target languages: tn
* OPUS readme: [en-tn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tn/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tn | 45.5 | 0.636 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-tn | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #tn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-tn
* source languages: en
* target languages: tn
* OPUS readme: en-tn
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 45.5, chr-F: 0.636
| [
"### opus-mt-en-tn\n\n\n* source languages: en\n* target languages: tn\n* OPUS readme: en-tn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.5, chr-F: 0.636"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-tn\n\n\n* source languages: en\n* target languages: tn\n* OPUS readme: en-tn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.5, chr-F: 0.636"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tn #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-tn\n\n\n* source languages: en\n* target languages: tn\n* OPUS readme: en-tn\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.5, chr-F: 0.636"
] |
translation | transformers |
### opus-mt-en-to
* source languages: en
* target languages: to
* OPUS readme: [en-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-to/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.to | 56.3 | 0.689 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-to | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"to",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #to #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-to
* source languages: en
* target languages: to
* OPUS readme: en-to
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 56.3, chr-F: 0.689
| [
"### opus-mt-en-to\n\n\n* source languages: en\n* target languages: to\n* OPUS readme: en-to\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.3, chr-F: 0.689"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #to #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-to\n\n\n* source languages: en\n* target languages: to\n* OPUS readme: en-to\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.3, chr-F: 0.689"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #to #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-to\n\n\n* source languages: en\n* target languages: to\n* OPUS readme: en-to\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.3, chr-F: 0.689"
] |
translation | transformers |
### opus-mt-en-toi
* source languages: en
* target languages: toi
* OPUS readme: [en-toi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-toi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-toi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-toi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-toi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.toi | 32.8 | 0.598 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-toi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"toi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #toi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-toi
* source languages: en
* target languages: toi
* OPUS readme: en-toi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 32.8, chr-F: 0.598
| [
"### opus-mt-en-toi\n\n\n* source languages: en\n* target languages: toi\n* OPUS readme: en-toi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.8, chr-F: 0.598"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #toi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-toi\n\n\n* source languages: en\n* target languages: toi\n* OPUS readme: en-toi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.8, chr-F: 0.598"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #toi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-toi\n\n\n* source languages: en\n* target languages: toi\n* OPUS readme: en-toi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.8, chr-F: 0.598"
] |
translation | transformers |
### opus-mt-en-tpi
* source languages: en
* target languages: tpi
* OPUS readme: [en-tpi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tpi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tpi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tpi | 38.7 | 0.568 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-tpi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tpi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #tpi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-tpi
* source languages: en
* target languages: tpi
* OPUS readme: en-tpi
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.7, chr-F: 0.568
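The chr-F figures reported above are character n-gram F-scores. As an illustration only (the official scores come from the OPUS-MT evaluation pipeline; real chrF averages n-gram orders 1..6 with beta=2, and the function names here are ours), a minimal single-order character F-score can be sketched as:

```python
from collections import Counter

def char_ngrams(text, n):
    # Character n-grams with whitespace removed (as chrF does)
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf_sketch(hypothesis, reference, n=2, beta=2.0):
    # Simplified single-order chrF: precision/recall over character
    # n-grams combined into an F_beta score. The real metric averages
    # over orders 1..6; this sketch uses one order for clarity.
    hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
    overlap = sum((hyp & ref).values())  # clipped n-gram matches
    if not hyp or not ref or not overlap:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
```

In practice the reported scores would be computed with the standard evaluation tooling rather than a hand-rolled metric; this only shows the shape of the calculation.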
| [
"### opus-mt-en-tpi\n\n\n* source languages: en\n* target languages: tpi\n* OPUS readme: en-tpi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.7, chr-F: 0.568"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tpi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-tpi\n\n\n* source languages: en\n* target languages: tpi\n* OPUS readme: en-tpi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.7, chr-F: 0.568"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tpi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-tpi\n\n\n* source languages: en\n* target languages: tpi\n* OPUS readme: en-tpi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.7, chr-F: 0.568"
] |
translation | transformers |
### eng-trk
* source group: English
* target group: Turkic languages
* OPUS readme: [eng-trk](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-trk/README.md)
* model: transformer
* source language(s): eng
* target language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-entr-engtur.eng.tur | 10.1 | 0.437 |
| newstest2016-entr-engtur.eng.tur | 9.2 | 0.410 |
| newstest2017-entr-engtur.eng.tur | 9.0 | 0.410 |
| newstest2018-entr-engtur.eng.tur | 9.2 | 0.413 |
| Tatoeba-test.eng-aze.eng.aze | 26.8 | 0.577 |
| Tatoeba-test.eng-bak.eng.bak | 7.6 | 0.308 |
| Tatoeba-test.eng-chv.eng.chv | 4.3 | 0.270 |
| Tatoeba-test.eng-crh.eng.crh | 8.1 | 0.330 |
| Tatoeba-test.eng-kaz.eng.kaz | 11.1 | 0.359 |
| Tatoeba-test.eng-kir.eng.kir | 28.6 | 0.524 |
| Tatoeba-test.eng-kjh.eng.kjh | 1.0 | 0.041 |
| Tatoeba-test.eng-kum.eng.kum | 2.2 | 0.075 |
| Tatoeba-test.eng.multi | 19.9 | 0.455 |
| Tatoeba-test.eng-ota.eng.ota | 0.5 | 0.065 |
| Tatoeba-test.eng-sah.eng.sah | 0.7 | 0.030 |
| Tatoeba-test.eng-tat.eng.tat | 9.7 | 0.316 |
| Tatoeba-test.eng-tuk.eng.tuk | 5.9 | 0.317 |
| Tatoeba-test.eng-tur.eng.tur | 34.6 | 0.623 |
| Tatoeba-test.eng-tyv.eng.tyv | 5.4 | 0.210 |
| Tatoeba-test.eng-uig.eng.uig | 0.1 | 0.155 |
| Tatoeba-test.eng-uzb.eng.uzb | 3.4 | 0.275 |
### System Info:
- hf_name: eng-trk
- source_languages: eng
- target_languages: trk
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-trk/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'tt', 'cv', 'tk', 'tr', 'ba', 'trk']
- src_constituents: {'eng'}
- tgt_constituents: {'kir_Cyrl', 'tat_Latn', 'tat', 'chv', 'uzb_Cyrl', 'kaz_Latn', 'aze_Latn', 'crh', 'kjh', 'uzb_Latn', 'ota_Arab', 'tuk_Latn', 'tuk', 'tat_Arab', 'sah', 'tyv', 'tur', 'uig_Arab', 'crh_Latn', 'kaz_Cyrl', 'uig_Cyrl', 'kum', 'ota_Latn', 'bak'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-trk/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: trk
- short_pair: en-trk
- chrF2_score: 0.455
- bleu: 19.9
- brevity_penalty: 1.0
- ref_len: 57072.0
- src_name: English
- tgt_name: Turkic languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: trk
- prefer_old: False
- long_pair: eng-trk
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "tt", "cv", "tk", "tr", "ba", "trk"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-trk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tt",
"cv",
"tk",
"tr",
"ba",
"trk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"tt",
"cv",
"tk",
"tr",
"ba",
"trk"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #tt #cv #tk #tr #ba #trk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-trk
* source group: English
* target group: Turkic languages
* OPUS readme: eng-trk
* model: transformer
* source language(s): eng
* target language(s): aze\_Latn bak chv crh crh\_Latn kaz\_Cyrl kaz\_Latn kir\_Cyrl kjh kum ota\_Arab ota\_Latn sah tat tat\_Arab tat\_Latn tuk tuk\_Latn tur tyv uig\_Arab uig\_Cyrl uzb\_Cyrl uzb\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 10.1, chr-F: 0.437
testset: URL, BLEU: 9.2, chr-F: 0.410
testset: URL, BLEU: 9.0, chr-F: 0.410
testset: URL, BLEU: 9.2, chr-F: 0.413
testset: URL, BLEU: 26.8, chr-F: 0.577
testset: URL, BLEU: 7.6, chr-F: 0.308
testset: URL, BLEU: 4.3, chr-F: 0.270
testset: URL, BLEU: 8.1, chr-F: 0.330
testset: URL, BLEU: 11.1, chr-F: 0.359
testset: URL, BLEU: 28.6, chr-F: 0.524
testset: URL, BLEU: 1.0, chr-F: 0.041
testset: URL, BLEU: 2.2, chr-F: 0.075
testset: URL, BLEU: 19.9, chr-F: 0.455
testset: URL, BLEU: 0.5, chr-F: 0.065
testset: URL, BLEU: 0.7, chr-F: 0.030
testset: URL, BLEU: 9.7, chr-F: 0.316
testset: URL, BLEU: 5.9, chr-F: 0.317
testset: URL, BLEU: 34.6, chr-F: 0.623
testset: URL, BLEU: 5.4, chr-F: 0.210
testset: URL, BLEU: 0.1, chr-F: 0.155
testset: URL, BLEU: 3.4, chr-F: 0.275
### System Info:
* hf\_name: eng-trk
* source\_languages: eng
* target\_languages: trk
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'tt', 'cv', 'tk', 'tr', 'ba', 'trk']
* src\_constituents: {'eng'}
* tgt\_constituents: {'kir\_Cyrl', 'tat\_Latn', 'tat', 'chv', 'uzb\_Cyrl', 'kaz\_Latn', 'aze\_Latn', 'crh', 'kjh', 'uzb\_Latn', 'ota\_Arab', 'tuk\_Latn', 'tuk', 'tat\_Arab', 'sah', 'tyv', 'tur', 'uig\_Arab', 'crh\_Latn', 'kaz\_Cyrl', 'uig\_Cyrl', 'kum', 'ota\_Latn', 'bak'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: trk
* short\_pair: en-trk
* chrF2\_score: 0.455
* bleu: 19.9
* brevity\_penalty: 1.0
* ref\_len: 57072.0
* src\_name: English
* tgt\_name: Turkic languages
* train\_date: 2020-08-01
* src\_alpha2: en
* tgt\_alpha2: trk
* prefer\_old: False
* long\_pair: eng-trk
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
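Because this model targets many Turkic languages, each input must begin with a sentence-initial `>>id<<` token as noted above. A minimal sketch of preparing such inputs (the helper name and the ID subset are ours; the commented-out lines show how the prepared string would plausibly be fed to the model via `transformers`, but are not executed here):

```python
# Subset of the target-language IDs listed in the card above (illustrative only)
TARGET_IDS = {"tur", "kaz_Cyrl", "aze_Latn", "tat", "uzb_Latn"}

def with_target_token(text, lang_id):
    # Prepend the sentence-initial target-language token required by
    # multilingual OPUS-MT models, e.g. ">>tur<< Hello" for Turkish output.
    if lang_id not in TARGET_IDS:
        raise ValueError(f"unknown target language id: {lang_id}")
    return f">>{lang_id}<< {text}"

# Hedged usage with transformers (requires downloading the model, so not run here):
# from transformers import MarianMTModel, MarianTokenizer
# tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-trk")
# model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-trk")
# batch = tok([with_target_token("How are you?", "tur")], return_tensors="pt")
# print(tok.decode(model.generate(**batch)[0], skip_special_tokens=True))
```

The valid IDs are the target language(s) listed in the card; omitting the token leaves the choice of output language undefined for a multilingual target.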
| [
"### eng-trk\n\n\n* source group: English\n* target group: Turkic languages\n* OPUS readme: eng-trk\n* model: transformer\n* source language(s): eng\n* target language(s): aze\\_Latn bak chv crh crh\\_Latn kaz\\_Cyrl kaz\\_Latn kir\\_Cyrl kjh kum ota\\_Arab ota\\_Latn sah tat tat\\_Arab tat\\_Latn tuk tuk\\_Latn tur tyv uig\\_Arab uig\\_Cyrl uzb\\_Cyrl uzb\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.1, chr-F: 0.437\ntestset: URL, BLEU: 9.2, chr-F: 0.410\ntestset: URL, BLEU: 9.0, chr-F: 0.410\ntestset: URL, BLEU: 9.2, chr-F: 0.413\ntestset: URL, BLEU: 26.8, chr-F: 0.577\ntestset: URL, BLEU: 7.6, chr-F: 0.308\ntestset: URL, BLEU: 4.3, chr-F: 0.270\ntestset: URL, BLEU: 8.1, chr-F: 0.330\ntestset: URL, BLEU: 11.1, chr-F: 0.359\ntestset: URL, BLEU: 28.6, chr-F: 0.524\ntestset: URL, BLEU: 1.0, chr-F: 0.041\ntestset: URL, BLEU: 2.2, chr-F: 0.075\ntestset: URL, BLEU: 19.9, chr-F: 0.455\ntestset: URL, BLEU: 0.5, chr-F: 0.065\ntestset: URL, BLEU: 0.7, chr-F: 0.030\ntestset: URL, BLEU: 9.7, chr-F: 0.316\ntestset: URL, BLEU: 5.9, chr-F: 0.317\ntestset: URL, BLEU: 34.6, chr-F: 0.623\ntestset: URL, BLEU: 5.4, chr-F: 0.210\ntestset: URL, BLEU: 0.1, chr-F: 0.155\ntestset: URL, BLEU: 3.4, chr-F: 0.275",
"### System Info:\n\n\n* hf\\_name: eng-trk\n* source\\_languages: eng\n* target\\_languages: trk\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'tt', 'cv', 'tk', 'tr', 'ba', 'trk']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'kir\\_Cyrl', 'tat\\_Latn', 'tat', 'chv', 'uzb\\_Cyrl', 'kaz\\_Latn', 'aze\\_Latn', 'crh', 'kjh', 'uzb\\_Latn', 'ota\\_Arab', 'tuk\\_Latn', 'tuk', 'tat\\_Arab', 'sah', 'tyv', 'tur', 'uig\\_Arab', 'crh\\_Latn', 'kaz\\_Cyrl', 'uig\\_Cyrl', 'kum', 'ota\\_Latn', 'bak'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: trk\n* short\\_pair: en-trk\n* chrF2\\_score: 0.455\n* bleu: 19.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 57072.0\n* src\\_name: English\n* tgt\\_name: Turkic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: trk\n* prefer\\_old: False\n* long\\_pair: eng-trk\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tt #cv #tk #tr #ba #trk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-trk\n\n\n* source group: English\n* target group: Turkic languages\n* OPUS readme: eng-trk\n* model: transformer\n* source language(s): eng\n* target language(s): aze\\_Latn bak chv crh crh\\_Latn kaz\\_Cyrl kaz\\_Latn kir\\_Cyrl kjh kum ota\\_Arab ota\\_Latn sah tat tat\\_Arab tat\\_Latn tuk tuk\\_Latn tur tyv uig\\_Arab uig\\_Cyrl uzb\\_Cyrl uzb\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.1, chr-F: 0.437\ntestset: URL, BLEU: 9.2, chr-F: 0.410\ntestset: URL, BLEU: 9.0, chr-F: 0.410\ntestset: URL, BLEU: 9.2, chr-F: 0.413\ntestset: URL, BLEU: 26.8, chr-F: 0.577\ntestset: URL, BLEU: 7.6, chr-F: 0.308\ntestset: URL, BLEU: 4.3, chr-F: 0.270\ntestset: URL, BLEU: 8.1, chr-F: 0.330\ntestset: URL, BLEU: 11.1, chr-F: 0.359\ntestset: URL, BLEU: 28.6, chr-F: 0.524\ntestset: URL, BLEU: 1.0, chr-F: 0.041\ntestset: URL, BLEU: 2.2, chr-F: 0.075\ntestset: URL, BLEU: 19.9, chr-F: 0.455\ntestset: URL, BLEU: 0.5, chr-F: 0.065\ntestset: URL, BLEU: 0.7, chr-F: 0.030\ntestset: URL, BLEU: 9.7, chr-F: 0.316\ntestset: URL, BLEU: 5.9, chr-F: 0.317\ntestset: URL, BLEU: 34.6, chr-F: 0.623\ntestset: URL, BLEU: 5.4, chr-F: 0.210\ntestset: URL, BLEU: 0.1, chr-F: 0.155\ntestset: URL, BLEU: 3.4, chr-F: 0.275",
"### System Info:\n\n\n* hf\\_name: eng-trk\n* source\\_languages: eng\n* target\\_languages: trk\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'tt', 'cv', 'tk', 'tr', 'ba', 'trk']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'kir\\_Cyrl', 'tat\\_Latn', 'tat', 'chv', 'uzb\\_Cyrl', 'kaz\\_Latn', 'aze\\_Latn', 'crh', 'kjh', 'uzb\\_Latn', 'ota\\_Arab', 'tuk\\_Latn', 'tuk', 'tat\\_Arab', 'sah', 'tyv', 'tur', 'uig\\_Arab', 'crh\\_Latn', 'kaz\\_Cyrl', 'uig\\_Cyrl', 'kum', 'ota\\_Latn', 'bak'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: trk\n* short\\_pair: en-trk\n* chrF2\\_score: 0.455\n* bleu: 19.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 57072.0\n* src\\_name: English\n* tgt\\_name: Turkic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: trk\n* prefer\\_old: False\n* long\\_pair: eng-trk\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
63,
717,
599
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tt #cv #tk #tr #ba #trk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-trk\n\n\n* source group: English\n* target group: Turkic languages\n* OPUS readme: eng-trk\n* model: transformer\n* source language(s): eng\n* target language(s): aze\\_Latn bak chv crh crh\\_Latn kaz\\_Cyrl kaz\\_Latn kir\\_Cyrl kjh kum ota\\_Arab ota\\_Latn sah tat tat\\_Arab tat\\_Latn tuk tuk\\_Latn tur tyv uig\\_Arab uig\\_Cyrl uzb\\_Cyrl uzb\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.1, chr-F: 0.437\ntestset: URL, BLEU: 9.2, chr-F: 0.410\ntestset: URL, BLEU: 9.0, chr-F: 0.410\ntestset: URL, BLEU: 9.2, chr-F: 0.413\ntestset: URL, BLEU: 26.8, chr-F: 0.577\ntestset: URL, BLEU: 7.6, chr-F: 0.308\ntestset: URL, BLEU: 4.3, chr-F: 0.270\ntestset: URL, BLEU: 8.1, chr-F: 0.330\ntestset: URL, BLEU: 11.1, chr-F: 0.359\ntestset: URL, BLEU: 28.6, chr-F: 0.524\ntestset: URL, BLEU: 1.0, chr-F: 0.041\ntestset: URL, BLEU: 2.2, chr-F: 0.075\ntestset: URL, BLEU: 19.9, chr-F: 0.455\ntestset: URL, BLEU: 0.5, chr-F: 0.065\ntestset: URL, BLEU: 0.7, chr-F: 0.030\ntestset: URL, BLEU: 9.7, chr-F: 0.316\ntestset: URL, BLEU: 5.9, chr-F: 0.317\ntestset: URL, BLEU: 34.6, chr-F: 0.623\ntestset: URL, BLEU: 5.4, chr-F: 0.210\ntestset: URL, BLEU: 0.1, chr-F: 0.155\ntestset: URL, BLEU: 3.4, chr-F: 0.275### System Info:\n\n\n* hf\\_name: eng-trk\n* source\\_languages: eng\n* target\\_languages: trk\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'tt', 'cv', 'tk', 'tr', 'ba', 'trk']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: 
{'kir\\_Cyrl', 'tat\\_Latn', 'tat', 'chv', 'uzb\\_Cyrl', 'kaz\\_Latn', 'aze\\_Latn', 'crh', 'kjh', 'uzb\\_Latn', 'ota\\_Arab', 'tuk\\_Latn', 'tuk', 'tat\\_Arab', 'sah', 'tyv', 'tur', 'uig\\_Arab', 'crh\\_Latn', 'kaz\\_Cyrl', 'uig\\_Cyrl', 'kum', 'ota\\_Latn', 'bak'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: trk\n* short\\_pair: en-trk\n* chrF2\\_score: 0.455\n* bleu: 19.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 57072.0\n* src\\_name: English\n* tgt\\_name: Turkic languages\n* train\\_date: 2020-08-01\n* src\\_alpha2: en\n* tgt\\_alpha2: trk\n* prefer\\_old: False\n* long\\_pair: eng-trk\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-ts
* source languages: en
* target languages: ts
* OPUS readme: [en-ts](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ts/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ts/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ts/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ts/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ts | 43.4 | 0.639 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ts | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ts
* source languages: en
* target languages: ts
* OPUS readme: en-ts
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 43.4, chr-F: 0.639
| [
"### opus-mt-en-ts\n\n\n* source languages: en\n* target languages: ts\n* OPUS readme: en-ts\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.4, chr-F: 0.639"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ts\n\n\n* source languages: en\n* target languages: ts\n* OPUS readme: en-ts\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.4, chr-F: 0.639"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ts\n\n\n* source languages: en\n* target languages: ts\n* OPUS readme: en-ts\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.4, chr-F: 0.639"
] |
translation | transformers |
### eng-tut
* source group: English
* target group: Altaic languages
* OPUS readme: [eng-tut](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tut/README.md)
* model: transformer
* source language(s): eng
* target language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum mon nog ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn xal
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-entr-engtur.eng.tur | 10.4 | 0.438 |
| newstest2016-entr-engtur.eng.tur | 9.1 | 0.414 |
| newstest2017-entr-engtur.eng.tur | 9.5 | 0.414 |
| newstest2018-entr-engtur.eng.tur | 9.5 | 0.415 |
| Tatoeba-test.eng-aze.eng.aze | 27.2 | 0.580 |
| Tatoeba-test.eng-bak.eng.bak | 5.8 | 0.298 |
| Tatoeba-test.eng-chv.eng.chv | 4.6 | 0.301 |
| Tatoeba-test.eng-crh.eng.crh | 6.5 | 0.342 |
| Tatoeba-test.eng-kaz.eng.kaz | 11.8 | 0.360 |
| Tatoeba-test.eng-kir.eng.kir | 24.6 | 0.499 |
| Tatoeba-test.eng-kjh.eng.kjh | 2.2 | 0.052 |
| Tatoeba-test.eng-kum.eng.kum | 8.0 | 0.229 |
| Tatoeba-test.eng-mon.eng.mon | 10.3 | 0.362 |
| Tatoeba-test.eng.multi | 19.5 | 0.451 |
| Tatoeba-test.eng-nog.eng.nog | 1.5 | 0.117 |
| Tatoeba-test.eng-ota.eng.ota | 0.2 | 0.035 |
| Tatoeba-test.eng-sah.eng.sah | 0.7 | 0.080 |
| Tatoeba-test.eng-tat.eng.tat | 10.8 | 0.320 |
| Tatoeba-test.eng-tuk.eng.tuk | 5.6 | 0.323 |
| Tatoeba-test.eng-tur.eng.tur | 34.2 | 0.623 |
| Tatoeba-test.eng-tyv.eng.tyv | 8.1 | 0.192 |
| Tatoeba-test.eng-uig.eng.uig | 0.1 | 0.158 |
| Tatoeba-test.eng-uzb.eng.uzb | 4.2 | 0.298 |
| Tatoeba-test.eng-xal.eng.xal | 0.1 | 0.061 |
### System Info:
- hf_name: eng-tut
- source_languages: eng
- target_languages: tut
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tut/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'tut']
- src_constituents: {'eng'}
- tgt_constituents: set()
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tut/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: tut
- short_pair: en-tut
- chrF2_score: 0.451
- bleu: 19.5
- brevity_penalty: 1.0
- ref_len: 57472.0
- src_name: English
- tgt_name: Altaic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: tut
- prefer_old: False
- long_pair: eng-tut
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "tut"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-tut | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tut",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"tut"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #tut #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-tut
* source group: English
* target group: Altaic languages
* OPUS readme: eng-tut
* model: transformer
* source language(s): eng
* target language(s): aze\_Latn bak chv crh crh\_Latn kaz\_Cyrl kaz\_Latn kir\_Cyrl kjh kum mon nog ota\_Arab ota\_Latn sah tat tat\_Arab tat\_Latn tuk tuk\_Latn tur tyv uig\_Arab uig\_Cyrl uzb\_Cyrl uzb\_Latn xal
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 10.4, chr-F: 0.438
testset: URL, BLEU: 9.1, chr-F: 0.414
testset: URL, BLEU: 9.5, chr-F: 0.414
testset: URL, BLEU: 9.5, chr-F: 0.415
testset: URL, BLEU: 27.2, chr-F: 0.580
testset: URL, BLEU: 5.8, chr-F: 0.298
testset: URL, BLEU: 4.6, chr-F: 0.301
testset: URL, BLEU: 6.5, chr-F: 0.342
testset: URL, BLEU: 11.8, chr-F: 0.360
testset: URL, BLEU: 24.6, chr-F: 0.499
testset: URL, BLEU: 2.2, chr-F: 0.052
testset: URL, BLEU: 8.0, chr-F: 0.229
testset: URL, BLEU: 10.3, chr-F: 0.362
testset: URL, BLEU: 19.5, chr-F: 0.451
testset: URL, BLEU: 1.5, chr-F: 0.117
testset: URL, BLEU: 0.2, chr-F: 0.035
testset: URL, BLEU: 0.7, chr-F: 0.080
testset: URL, BLEU: 10.8, chr-F: 0.320
testset: URL, BLEU: 5.6, chr-F: 0.323
testset: URL, BLEU: 34.2, chr-F: 0.623
testset: URL, BLEU: 8.1, chr-F: 0.192
testset: URL, BLEU: 0.1, chr-F: 0.158
testset: URL, BLEU: 4.2, chr-F: 0.298
testset: URL, BLEU: 0.1, chr-F: 0.061
### System Info:
* hf\_name: eng-tut
* source\_languages: eng
* target\_languages: tut
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'tut']
* src\_constituents: {'eng'}
* tgt\_constituents: set()
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: tut
* short\_pair: en-tut
* chrF2\_score: 0.451
* bleu: 19.5
* brevity\_penalty: 1.0
* ref\_len: 57472.0
* src\_name: English
* tgt\_name: Altaic languages
* train\_date: 2020-08-02
* src\_alpha2: en
* tgt\_alpha2: tut
* prefer\_old: False
* long\_pair: eng-tut
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-tut\n\n\n* source group: English\n* target group: Altaic languages\n* OPUS readme: eng-tut\n* model: transformer\n* source language(s): eng\n* target language(s): aze\\_Latn bak chv crh crh\\_Latn kaz\\_Cyrl kaz\\_Latn kir\\_Cyrl kjh kum mon nog ota\\_Arab ota\\_Latn sah tat tat\\_Arab tat\\_Latn tuk tuk\\_Latn tur tyv uig\\_Arab uig\\_Cyrl uzb\\_Cyrl uzb\\_Latn xal\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.4, chr-F: 0.438\ntestset: URL, BLEU: 9.1, chr-F: 0.414\ntestset: URL, BLEU: 9.5, chr-F: 0.414\ntestset: URL, BLEU: 9.5, chr-F: 0.415\ntestset: URL, BLEU: 27.2, chr-F: 0.580\ntestset: URL, BLEU: 5.8, chr-F: 0.298\ntestset: URL, BLEU: 4.6, chr-F: 0.301\ntestset: URL, BLEU: 6.5, chr-F: 0.342\ntestset: URL, BLEU: 11.8, chr-F: 0.360\ntestset: URL, BLEU: 24.6, chr-F: 0.499\ntestset: URL, BLEU: 2.2, chr-F: 0.052\ntestset: URL, BLEU: 8.0, chr-F: 0.229\ntestset: URL, BLEU: 10.3, chr-F: 0.362\ntestset: URL, BLEU: 19.5, chr-F: 0.451\ntestset: URL, BLEU: 1.5, chr-F: 0.117\ntestset: URL, BLEU: 0.2, chr-F: 0.035\ntestset: URL, BLEU: 0.7, chr-F: 0.080\ntestset: URL, BLEU: 10.8, chr-F: 0.320\ntestset: URL, BLEU: 5.6, chr-F: 0.323\ntestset: URL, BLEU: 34.2, chr-F: 0.623\ntestset: URL, BLEU: 8.1, chr-F: 0.192\ntestset: URL, BLEU: 0.1, chr-F: 0.158\ntestset: URL, BLEU: 4.2, chr-F: 0.298\ntestset: URL, BLEU: 0.1, chr-F: 0.061",
"### System Info:\n\n\n* hf\\_name: eng-tut\n* source\\_languages: eng\n* target\\_languages: tut\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'tut']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: tut\n* short\\_pair: en-tut\n* chrF2\\_score: 0.451\n* bleu: 19.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 57472.0\n* src\\_name: English\n* tgt\\_name: Altaic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: tut\n* prefer\\_old: False\n* long\\_pair: eng-tut\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tut #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-tut\n\n\n* source group: English\n* target group: Altaic languages\n* OPUS readme: eng-tut\n* model: transformer\n* source language(s): eng\n* target language(s): aze\\_Latn bak chv crh crh\\_Latn kaz\\_Cyrl kaz\\_Latn kir\\_Cyrl kjh kum mon nog ota\\_Arab ota\\_Latn sah tat tat\\_Arab tat\\_Latn tuk tuk\\_Latn tur tyv uig\\_Arab uig\\_Cyrl uzb\\_Cyrl uzb\\_Latn xal\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.4, chr-F: 0.438\ntestset: URL, BLEU: 9.1, chr-F: 0.414\ntestset: URL, BLEU: 9.5, chr-F: 0.414\ntestset: URL, BLEU: 9.5, chr-F: 0.415\ntestset: URL, BLEU: 27.2, chr-F: 0.580\ntestset: URL, BLEU: 5.8, chr-F: 0.298\ntestset: URL, BLEU: 4.6, chr-F: 0.301\ntestset: URL, BLEU: 6.5, chr-F: 0.342\ntestset: URL, BLEU: 11.8, chr-F: 0.360\ntestset: URL, BLEU: 24.6, chr-F: 0.499\ntestset: URL, BLEU: 2.2, chr-F: 0.052\ntestset: URL, BLEU: 8.0, chr-F: 0.229\ntestset: URL, BLEU: 10.3, chr-F: 0.362\ntestset: URL, BLEU: 19.5, chr-F: 0.451\ntestset: URL, BLEU: 1.5, chr-F: 0.117\ntestset: URL, BLEU: 0.2, chr-F: 0.035\ntestset: URL, BLEU: 0.7, chr-F: 0.080\ntestset: URL, BLEU: 10.8, chr-F: 0.320\ntestset: URL, BLEU: 5.6, chr-F: 0.323\ntestset: URL, BLEU: 34.2, chr-F: 0.623\ntestset: URL, BLEU: 8.1, chr-F: 0.192\ntestset: URL, BLEU: 0.1, chr-F: 0.158\ntestset: URL, BLEU: 4.2, chr-F: 0.298\ntestset: URL, BLEU: 0.1, chr-F: 0.061",
"### System Info:\n\n\n* hf\\_name: eng-tut\n* source\\_languages: eng\n* target\\_languages: tut\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'tut']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: tut\n* short\\_pair: en-tut\n* chrF2\\_score: 0.451\n* bleu: 19.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 57472.0\n* src\\_name: English\n* tgt\\_name: Altaic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: tut\n* prefer\\_old: False\n* long\\_pair: eng-tut\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
789,
397
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tut #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-tut\n\n\n* source group: English\n* target group: Altaic languages\n* OPUS readme: eng-tut\n* model: transformer\n* source language(s): eng\n* target language(s): aze\\_Latn bak chv crh crh\\_Latn kaz\\_Cyrl kaz\\_Latn kir\\_Cyrl kjh kum mon nog ota\\_Arab ota\\_Latn sah tat tat\\_Arab tat\\_Latn tuk tuk\\_Latn tur tyv uig\\_Arab uig\\_Cyrl uzb\\_Cyrl uzb\\_Latn xal\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 10.4, chr-F: 0.438\ntestset: URL, BLEU: 9.1, chr-F: 0.414\ntestset: URL, BLEU: 9.5, chr-F: 0.414\ntestset: URL, BLEU: 9.5, chr-F: 0.415\ntestset: URL, BLEU: 27.2, chr-F: 0.580\ntestset: URL, BLEU: 5.8, chr-F: 0.298\ntestset: URL, BLEU: 4.6, chr-F: 0.301\ntestset: URL, BLEU: 6.5, chr-F: 0.342\ntestset: URL, BLEU: 11.8, chr-F: 0.360\ntestset: URL, BLEU: 24.6, chr-F: 0.499\ntestset: URL, BLEU: 2.2, chr-F: 0.052\ntestset: URL, BLEU: 8.0, chr-F: 0.229\ntestset: URL, BLEU: 10.3, chr-F: 0.362\ntestset: URL, BLEU: 19.5, chr-F: 0.451\ntestset: URL, BLEU: 1.5, chr-F: 0.117\ntestset: URL, BLEU: 0.2, chr-F: 0.035\ntestset: URL, BLEU: 0.7, chr-F: 0.080\ntestset: URL, BLEU: 10.8, chr-F: 0.320\ntestset: URL, BLEU: 5.6, chr-F: 0.323\ntestset: URL, BLEU: 34.2, chr-F: 0.623\ntestset: URL, BLEU: 8.1, chr-F: 0.192\ntestset: URL, BLEU: 0.1, chr-F: 0.158\ntestset: URL, BLEU: 4.2, chr-F: 0.298\ntestset: URL, BLEU: 0.1, chr-F: 0.061### System Info:\n\n\n* hf\\_name: eng-tut\n* source\\_languages: eng\n* target\\_languages: tut\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'tut']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: set()\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: tut\n* short\\_pair: en-tut\n* chrF2\\_score: 0.451\n* bleu: 19.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 57472.0\n* src\\_name: English\n* tgt\\_name: Altaic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: tut\n* prefer\\_old: False\n* long\\_pair: eng-tut\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-tvl
* source languages: en
* target languages: tvl
* OPUS readme: [en-tvl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tvl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tvl/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tvl | 46.9 | 0.625 |
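The chr-F column above is a character n-gram F-score. As a rough illustration only (the official scores for these cards come from dedicated evaluation tooling, and this toy version skips whitespace handling details and word n-grams used by chrF++), a simplified sentence-level computation might look like:

```python
from collections import Counter

def char_ngrams(text, n):
    # Character n-grams with spaces stripped (a simplification).
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def simple_chrf(hyp, ref, max_n=6, beta=2.0):
    # Simplified chrF sketch: average n-gram precision/recall for
    # n = 1..max_n, combined with an F-beta score (beta=2 weights recall).
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        if sum(h.values()) == 0 or sum(r.values()) == 0:
            continue
        overlap = sum((h & r).values())
        precisions.append(overlap / sum(h.values()))
        recalls.append(overlap / sum(r.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(simple_chrf("tofa mai", "tofa mai"))  # identical strings score 1.0
```

This is a sketch of the metric's shape, not a drop-in replacement for the scoring pipeline used to produce the numbers above.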
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-tvl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tvl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #tvl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-tvl
* source languages: en
* target languages: tvl
* OPUS readme: en-tvl
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 46.9, chr-F: 0.625
| [
"### opus-mt-en-tvl\n\n\n* source languages: en\n* target languages: tvl\n* OPUS readme: en-tvl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.9, chr-F: 0.625"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tvl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-tvl\n\n\n* source languages: en\n* target languages: tvl\n* OPUS readme: en-tvl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.9, chr-F: 0.625"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tvl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-tvl\n\n\n* source languages: en\n* target languages: tvl\n* OPUS readme: en-tvl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.9, chr-F: 0.625"
] |
translation | transformers |
### opus-mt-en-tw
* source languages: en
* target languages: tw
* OPUS readme: [en-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tw | 38.2 | 0.577 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-tw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"tw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #tw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-tw
* source languages: en
* target languages: tw
* OPUS readme: en-tw
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 38.2, chr-F: 0.577
| [
"### opus-mt-en-tw\n\n\n* source languages: en\n* target languages: tw\n* OPUS readme: en-tw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.577"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-tw\n\n\n* source languages: en\n* target languages: tw\n* OPUS readme: en-tw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.577"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #tw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-tw\n\n\n* source languages: en\n* target languages: tw\n* OPUS readme: en-tw\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.577"
] |
translation | transformers |
### opus-mt-en-ty
* source languages: en
* target languages: ty
* OPUS readme: [en-ty](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ty/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ty/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.ty | 46.8 | 0.619 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ty | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ty",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ty #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-ty
* source languages: en
* target languages: ty
* OPUS readme: en-ty
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 46.8, chr-F: 0.619
| [
"### opus-mt-en-ty\n\n\n* source languages: en\n* target languages: ty\n* OPUS readme: en-ty\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.8, chr-F: 0.619"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ty #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-ty\n\n\n* source languages: en\n* target languages: ty\n* OPUS readme: en-ty\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.8, chr-F: 0.619"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ty #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-ty\n\n\n* source languages: en\n* target languages: ty\n* OPUS readme: en-ty\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.8, chr-F: 0.619"
] |
translation | transformers |
### opus-mt-en-uk
* source languages: en
* target languages: uk
* OPUS readme: [en-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-uk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.uk | 50.2 | 0.674 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-uk | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"uk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-uk
* source languages: en
* target languages: uk
* OPUS readme: en-uk
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 50.2, chr-F: 0.674
| [
"### opus-mt-en-uk\n\n\n* source languages: en\n* target languages: uk\n* OPUS readme: en-uk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.2, chr-F: 0.674"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-uk\n\n\n* source languages: en\n* target languages: uk\n* OPUS readme: en-uk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.2, chr-F: 0.674"
] | [
51,
106
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-uk\n\n\n* source languages: en\n* target languages: uk\n* OPUS readme: en-uk\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.2, chr-F: 0.674"
] |
translation | transformers |
### opus-mt-en-umb
* source languages: en
* target languages: umb
* OPUS readme: [en-umb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-umb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-umb/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.umb | 28.6 | 0.510 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-umb | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"umb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #umb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-umb
* source languages: en
* target languages: umb
* OPUS readme: en-umb
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 28.6, chr-F: 0.510
| [
"### opus-mt-en-umb\n\n\n* source languages: en\n* target languages: umb\n* OPUS readme: en-umb\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.510"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #umb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-umb\n\n\n* source languages: en\n* target languages: umb\n* OPUS readme: en-umb\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.510"
] | [
52,
108
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #umb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-umb\n\n\n* source languages: en\n* target languages: umb\n* OPUS readme: en-umb\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.510"
] |
translation | transformers |
### eng-urd
* source group: English
* target group: Urdu
* OPUS readme: [eng-urd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urd/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): urd
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.urd | 12.1 | 0.390 |
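The System Info section below reports a brevity\_penalty of 1.0 alongside the BLEU score. As a reminder of what that field means, BLEU's brevity penalty is 1 when the hypothesis corpus is at least as long as the reference, and exp(1 - ref\_len / hyp\_len) otherwise; a minimal sketch:

```python
import math

def brevity_penalty(hyp_len, ref_len):
    # BLEU brevity penalty: no penalty for hypotheses at least as long
    # as the reference; exponential penalty for shorter ones.
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

print(brevity_penalty(12155, 12155.0))  # → 1.0, as reported for this model
```

A reported penalty of 1.0 therefore just means the system's translations were not shorter than the references overall.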
### System Info:
- hf_name: eng-urd
- source_languages: eng
- target_languages: urd
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urd/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'ur']
- src_constituents: {'eng'}
- tgt_constituents: {'urd'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: urd
- short_pair: en-ur
- chrF2_score: 0.39
- bleu: 12.1
- brevity_penalty: 1.0
- ref_len: 12155.0
- src_name: English
- tgt_name: Urdu
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: ur
- prefer_old: False
- long_pair: eng-urd
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "ur"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-ur | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"ur",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ur"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #ur #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-urd
* source group: English
* target group: Urdu
* OPUS readme: eng-urd
* model: transformer-align
* source language(s): eng
* target language(s): urd
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 12.1, chr-F: 0.390
### System Info:
* hf\_name: eng-urd
* source\_languages: eng
* target\_languages: urd
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'ur']
* src\_constituents: {'eng'}
* tgt\_constituents: {'urd'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: urd
* short\_pair: en-ur
* chrF2\_score: 0.39
* bleu: 12.1
* brevity\_penalty: 1.0
* ref\_len: 12155.0
* src\_name: English
* tgt\_name: Urdu
* train\_date: 2020-06-17
* src\_alpha2: en
* tgt\_alpha2: ur
* prefer\_old: False
* long\_pair: eng-urd
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-urd\n\n\n* source group: English\n* target group: Urdu\n* OPUS readme: eng-urd\n* model: transformer-align\n* source language(s): eng\n* target language(s): urd\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.1, chr-F: 0.390",
"### System Info:\n\n\n* hf\\_name: eng-urd\n* source\\_languages: eng\n* target\\_languages: urd\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ur']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'urd'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: urd\n* short\\_pair: en-ur\n* chrF2\\_score: 0.39\n* bleu: 12.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 12155.0\n* src\\_name: English\n* tgt\\_name: Urdu\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: ur\n* prefer\\_old: False\n* long\\_pair: eng-urd\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ur #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-urd\n\n\n* source group: English\n* target group: Urdu\n* OPUS readme: eng-urd\n* model: transformer-align\n* source language(s): eng\n* target language(s): urd\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.1, chr-F: 0.390",
"### System Info:\n\n\n* hf\\_name: eng-urd\n* source\\_languages: eng\n* target\\_languages: urd\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ur']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'urd'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: urd\n* short\\_pair: en-ur\n* chrF2\\_score: 0.39\n* bleu: 12.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 12155.0\n* src\\_name: English\n* tgt\\_name: Urdu\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: ur\n* prefer\\_old: False\n* long\\_pair: eng-urd\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
133,
394
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #ur #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-urd\n\n\n* source group: English\n* target group: Urdu\n* OPUS readme: eng-urd\n* model: transformer-align\n* source language(s): eng\n* target language(s): urd\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.1, chr-F: 0.390### System Info:\n\n\n* hf\\_name: eng-urd\n* source\\_languages: eng\n* target\\_languages: urd\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'ur']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'urd'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: urd\n* short\\_pair: en-ur\n* chrF2\\_score: 0.39\n* bleu: 12.1\n* brevity\\_penalty: 1.0\n* ref\\_len: 12155.0\n* src\\_name: English\n* tgt\\_name: Urdu\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: ur\n* prefer\\_old: False\n* long\\_pair: eng-urd\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-urj
* source group: English
* target group: Uralic languages
* OPUS readme: [eng-urj](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urj/README.md)
* model: transformer
* source language(s): eng
* target language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi-engfin.eng.fin | 18.3 | 0.519 |
| newsdev2018-enet-engest.eng.est | 19.3 | 0.520 |
| newssyscomb2009-enghun.eng.hun | 15.4 | 0.471 |
| newstest2009-enghun.eng.hun | 15.7 | 0.468 |
| newstest2015-enfi-engfin.eng.fin | 20.2 | 0.534 |
| newstest2016-enfi-engfin.eng.fin | 20.7 | 0.541 |
| newstest2017-enfi-engfin.eng.fin | 23.6 | 0.566 |
| newstest2018-enet-engest.eng.est | 20.8 | 0.535 |
| newstest2018-enfi-engfin.eng.fin | 15.8 | 0.499 |
| newstest2019-enfi-engfin.eng.fin | 19.9 | 0.518 |
| newstestB2016-enfi-engfin.eng.fin | 16.6 | 0.509 |
| newstestB2017-enfi-engfin.eng.fin | 19.4 | 0.529 |
| Tatoeba-test.eng-chm.eng.chm | 1.3 | 0.127 |
| Tatoeba-test.eng-est.eng.est | 51.0 | 0.692 |
| Tatoeba-test.eng-fin.eng.fin | 34.6 | 0.597 |
| Tatoeba-test.eng-fkv.eng.fkv | 2.2 | 0.302 |
| Tatoeba-test.eng-hun.eng.hun | 35.6 | 0.591 |
| Tatoeba-test.eng-izh.eng.izh | 5.7 | 0.211 |
| Tatoeba-test.eng-kom.eng.kom | 3.0 | 0.012 |
| Tatoeba-test.eng-krl.eng.krl | 8.5 | 0.230 |
| Tatoeba-test.eng-liv.eng.liv | 2.7 | 0.077 |
| Tatoeba-test.eng-mdf.eng.mdf | 2.8 | 0.007 |
| Tatoeba-test.eng.multi | 35.1 | 0.588 |
| Tatoeba-test.eng-myv.eng.myv | 1.3 | 0.014 |
| Tatoeba-test.eng-sma.eng.sma | 1.8 | 0.095 |
| Tatoeba-test.eng-sme.eng.sme | 6.8 | 0.204 |
| Tatoeba-test.eng-udm.eng.udm | 1.1 | 0.121 |
### System Info:
- hf_name: eng-urj
- source_languages: eng
- target_languages: urj
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urj/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'se', 'fi', 'hu', 'et', 'urj']
- src_constituents: {'eng'}
- tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urj/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: urj
- short_pair: en-urj
- chrF2_score: 0.588
- bleu: 35.1
- brevity_penalty: 0.943
- ref_len: 59664.0
- src_name: English
- tgt_name: Uralic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: urj
- prefer_old: False
- long_pair: eng-urj
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "se", "fi", "hu", "et", "urj"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-urj | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"se",
"fi",
"hu",
"et",
"urj",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"se",
"fi",
"hu",
"et",
"urj"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #se #fi #hu #et #urj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-urj
* source group: English
* target group: Uralic languages
* OPUS readme: eng-urj
* model: transformer
* source language(s): eng
* target language(s): est fin fkv\_Latn hun izh kpv krl liv\_Latn mdf mhr myv sma sme udm vro
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 18.3, chr-F: 0.519
testset: URL, BLEU: 19.3, chr-F: 0.520
testset: URL, BLEU: 15.4, chr-F: 0.471
testset: URL, BLEU: 15.7, chr-F: 0.468
testset: URL, BLEU: 20.2, chr-F: 0.534
testset: URL, BLEU: 20.7, chr-F: 0.541
testset: URL, BLEU: 23.6, chr-F: 0.566
testset: URL, BLEU: 20.8, chr-F: 0.535
testset: URL, BLEU: 15.8, chr-F: 0.499
testset: URL, BLEU: 19.9, chr-F: 0.518
testset: URL, BLEU: 16.6, chr-F: 0.509
testset: URL, BLEU: 19.4, chr-F: 0.529
testset: URL, BLEU: 1.3, chr-F: 0.127
testset: URL, BLEU: 51.0, chr-F: 0.692
testset: URL, BLEU: 34.6, chr-F: 0.597
testset: URL, BLEU: 2.2, chr-F: 0.302
testset: URL, BLEU: 35.6, chr-F: 0.591
testset: URL, BLEU: 5.7, chr-F: 0.211
testset: URL, BLEU: 3.0, chr-F: 0.012
testset: URL, BLEU: 8.5, chr-F: 0.230
testset: URL, BLEU: 2.7, chr-F: 0.077
testset: URL, BLEU: 2.8, chr-F: 0.007
testset: URL, BLEU: 35.1, chr-F: 0.588
testset: URL, BLEU: 1.3, chr-F: 0.014
testset: URL, BLEU: 1.8, chr-F: 0.095
testset: URL, BLEU: 6.8, chr-F: 0.204
testset: URL, BLEU: 1.1, chr-F: 0.121
### System Info:
* hf\_name: eng-urj
* source\_languages: eng
* target\_languages: urj
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'se', 'fi', 'hu', 'et', 'urj']
* src\_constituents: {'eng'}
* tgt\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\_Latn', 'est', 'mhr', 'sma'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: urj
* short\_pair: en-urj
* chrF2\_score: 0.588
* bleu: 35.1
* brevity\_penalty: 0.943
* ref\_len: 59664.0
* src\_name: English
* tgt\_name: Uralic languages
* train\_date: 2020-08-02
* src\_alpha2: en
* tgt\_alpha2: urj
* prefer\_old: False
* long\_pair: eng-urj
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-urj\n\n\n* source group: English\n* target group: Uralic languages\n* OPUS readme: eng-urj\n* model: transformer\n* source language(s): eng\n* target language(s): est fin fkv\\_Latn hun izh kpv krl liv\\_Latn mdf mhr myv sma sme udm vro\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.3, chr-F: 0.519\ntestset: URL, BLEU: 19.3, chr-F: 0.520\ntestset: URL, BLEU: 15.4, chr-F: 0.471\ntestset: URL, BLEU: 15.7, chr-F: 0.468\ntestset: URL, BLEU: 20.2, chr-F: 0.534\ntestset: URL, BLEU: 20.7, chr-F: 0.541\ntestset: URL, BLEU: 23.6, chr-F: 0.566\ntestset: URL, BLEU: 20.8, chr-F: 0.535\ntestset: URL, BLEU: 15.8, chr-F: 0.499\ntestset: URL, BLEU: 19.9, chr-F: 0.518\ntestset: URL, BLEU: 16.6, chr-F: 0.509\ntestset: URL, BLEU: 19.4, chr-F: 0.529\ntestset: URL, BLEU: 1.3, chr-F: 0.127\ntestset: URL, BLEU: 51.0, chr-F: 0.692\ntestset: URL, BLEU: 34.6, chr-F: 0.597\ntestset: URL, BLEU: 2.2, chr-F: 0.302\ntestset: URL, BLEU: 35.6, chr-F: 0.591\ntestset: URL, BLEU: 5.7, chr-F: 0.211\ntestset: URL, BLEU: 3.0, chr-F: 0.012\ntestset: URL, BLEU: 8.5, chr-F: 0.230\ntestset: URL, BLEU: 2.7, chr-F: 0.077\ntestset: URL, BLEU: 2.8, chr-F: 0.007\ntestset: URL, BLEU: 35.1, chr-F: 0.588\ntestset: URL, BLEU: 1.3, chr-F: 0.014\ntestset: URL, BLEU: 1.8, chr-F: 0.095\ntestset: URL, BLEU: 6.8, chr-F: 0.204\ntestset: URL, BLEU: 1.1, chr-F: 0.121",
"### System Info:\n\n\n* hf\\_name: eng-urj\n* source\\_languages: eng\n* target\\_languages: urj\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'se', 'fi', 'hu', 'et', 'urj']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: urj\n* short\\_pair: en-urj\n* chrF2\\_score: 0.588\n* bleu: 35.1\n* brevity\\_penalty: 0.943\n* ref\\_len: 59664.0\n* src\\_name: English\n* tgt\\_name: Uralic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: urj\n* prefer\\_old: False\n* long\\_pair: eng-urj\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #se #fi #hu #et #urj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-urj\n\n\n* source group: English\n* target group: Uralic languages\n* OPUS readme: eng-urj\n* model: transformer\n* source language(s): eng\n* target language(s): est fin fkv\\_Latn hun izh kpv krl liv\\_Latn mdf mhr myv sma sme udm vro\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.3, chr-F: 0.519\ntestset: URL, BLEU: 19.3, chr-F: 0.520\ntestset: URL, BLEU: 15.4, chr-F: 0.471\ntestset: URL, BLEU: 15.7, chr-F: 0.468\ntestset: URL, BLEU: 20.2, chr-F: 0.534\ntestset: URL, BLEU: 20.7, chr-F: 0.541\ntestset: URL, BLEU: 23.6, chr-F: 0.566\ntestset: URL, BLEU: 20.8, chr-F: 0.535\ntestset: URL, BLEU: 15.8, chr-F: 0.499\ntestset: URL, BLEU: 19.9, chr-F: 0.518\ntestset: URL, BLEU: 16.6, chr-F: 0.509\ntestset: URL, BLEU: 19.4, chr-F: 0.529\ntestset: URL, BLEU: 1.3, chr-F: 0.127\ntestset: URL, BLEU: 51.0, chr-F: 0.692\ntestset: URL, BLEU: 34.6, chr-F: 0.597\ntestset: URL, BLEU: 2.2, chr-F: 0.302\ntestset: URL, BLEU: 35.6, chr-F: 0.591\ntestset: URL, BLEU: 5.7, chr-F: 0.211\ntestset: URL, BLEU: 3.0, chr-F: 0.012\ntestset: URL, BLEU: 8.5, chr-F: 0.230\ntestset: URL, BLEU: 2.7, chr-F: 0.077\ntestset: URL, BLEU: 2.8, chr-F: 0.007\ntestset: URL, BLEU: 35.1, chr-F: 0.588\ntestset: URL, BLEU: 1.3, chr-F: 0.014\ntestset: URL, BLEU: 1.8, chr-F: 0.095\ntestset: URL, BLEU: 6.8, chr-F: 0.204\ntestset: URL, BLEU: 1.1, chr-F: 0.121",
"### System Info:\n\n\n* hf\\_name: eng-urj\n* source\\_languages: eng\n* target\\_languages: urj\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'se', 'fi', 'hu', 'et', 'urj']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: urj\n* short\\_pair: en-urj\n* chrF2\\_score: 0.588\n* bleu: 35.1\n* brevity\\_penalty: 0.943\n* ref\\_len: 59664.0\n* src\\_name: English\n* tgt\\_name: Uralic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: urj\n* prefer\\_old: False\n* long\\_pair: eng-urj\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
60,
786,
501
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #se #fi #hu #et #urj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-urj\n\n\n* source group: English\n* target group: Uralic languages\n* OPUS readme: eng-urj\n* model: transformer\n* source language(s): eng\n* target language(s): est fin fkv\\_Latn hun izh kpv krl liv\\_Latn mdf mhr myv sma sme udm vro\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.3, chr-F: 0.519\ntestset: URL, BLEU: 19.3, chr-F: 0.520\ntestset: URL, BLEU: 15.4, chr-F: 0.471\ntestset: URL, BLEU: 15.7, chr-F: 0.468\ntestset: URL, BLEU: 20.2, chr-F: 0.534\ntestset: URL, BLEU: 20.7, chr-F: 0.541\ntestset: URL, BLEU: 23.6, chr-F: 0.566\ntestset: URL, BLEU: 20.8, chr-F: 0.535\ntestset: URL, BLEU: 15.8, chr-F: 0.499\ntestset: URL, BLEU: 19.9, chr-F: 0.518\ntestset: URL, BLEU: 16.6, chr-F: 0.509\ntestset: URL, BLEU: 19.4, chr-F: 0.529\ntestset: URL, BLEU: 1.3, chr-F: 0.127\ntestset: URL, BLEU: 51.0, chr-F: 0.692\ntestset: URL, BLEU: 34.6, chr-F: 0.597\ntestset: URL, BLEU: 2.2, chr-F: 0.302\ntestset: URL, BLEU: 35.6, chr-F: 0.591\ntestset: URL, BLEU: 5.7, chr-F: 0.211\ntestset: URL, BLEU: 3.0, chr-F: 0.012\ntestset: URL, BLEU: 8.5, chr-F: 0.230\ntestset: URL, BLEU: 2.7, chr-F: 0.077\ntestset: URL, BLEU: 2.8, chr-F: 0.007\ntestset: URL, BLEU: 35.1, chr-F: 0.588\ntestset: URL, BLEU: 1.3, chr-F: 0.014\ntestset: URL, BLEU: 1.8, chr-F: 0.095\ntestset: URL, BLEU: 6.8, chr-F: 0.204\ntestset: URL, BLEU: 1.1, chr-F: 0.121### System Info:\n\n\n* hf\\_name: eng-urj\n* source\\_languages: eng\n* target\\_languages: urj\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: 
['translation']\n* languages: ['en', 'se', 'fi', 'hu', 'et', 'urj']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: urj\n* short\\_pair: en-urj\n* chrF2\\_score: 0.588\n* bleu: 35.1\n* brevity\\_penalty: 0.943\n* ref\\_len: 59664.0\n* src\\_name: English\n* tgt\\_name: Uralic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: urj\n* prefer\\_old: False\n* long\\_pair: eng-urj\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-vie
* source group: English
* target group: Vietnamese
* OPUS readme: [eng-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-vie/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): vie vie_Hani
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.vie | 37.2 | 0.542 |
### System Info:
- hf_name: eng-vie
- source_languages: eng
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'vi']
- src_constituents: {'eng'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-vie/opus-2020-06-17.test.txt
- src_alpha3: eng
- tgt_alpha3: vie
- short_pair: en-vi
- chrF2_score: 0.542
- bleu: 37.2
- brevity_penalty: 0.973
- ref_len: 24427.0
- src_name: English
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: en
- tgt_alpha2: vi
- prefer_old: False
- long_pair: eng-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "vi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-vi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"vi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"vi"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-vie
* source group: English
* target group: Vietnamese
* OPUS readme: eng-vie
* model: transformer-align
* source language(s): eng
* target language(s): vie vie\_Hani
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.2, chr-F: 0.542
### System Info:
* hf\_name: eng-vie
* source\_languages: eng
* target\_languages: vie
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'vi']
* src\_constituents: {'eng'}
* tgt\_constituents: {'vie', 'vie\_Hani'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: vie
* short\_pair: en-vi
* chrF2\_score: 0.542
* bleu: 37.2
* brevity\_penalty: 0.973
* ref\_len: 24427.0
* src\_name: English
* tgt\_name: Vietnamese
* train\_date: 2020-06-17
* src\_alpha2: en
* tgt\_alpha2: vi
* prefer\_old: False
* long\_pair: eng-vie
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-vie\n\n\n* source group: English\n* target group: Vietnamese\n* OPUS readme: eng-vie\n* model: transformer-align\n* source language(s): eng\n* target language(s): vie vie\\_Hani\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.2, chr-F: 0.542",
"### System Info:\n\n\n* hf\\_name: eng-vie\n* source\\_languages: eng\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'vi']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: vie\n* short\\_pair: en-vi\n* chrF2\\_score: 0.542\n* bleu: 37.2\n* brevity\\_penalty: 0.973\n* ref\\_len: 24427.0\n* src\\_name: English\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: eng-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-vie\n\n\n* source group: English\n* target group: Vietnamese\n* OPUS readme: eng-vie\n* model: transformer-align\n* source language(s): eng\n* target language(s): vie vie\\_Hani\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.2, chr-F: 0.542",
"### System Info:\n\n\n* hf\\_name: eng-vie\n* source\\_languages: eng\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'vi']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: vie\n* short\\_pair: en-vi\n* chrF2\\_score: 0.542\n* bleu: 37.2\n* brevity\\_penalty: 0.973\n* ref\\_len: 24427.0\n* src\\_name: English\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: eng-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
163,
399
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-vie\n\n\n* source group: English\n* target group: Vietnamese\n* OPUS readme: eng-vie\n* model: transformer-align\n* source language(s): eng\n* target language(s): vie vie\\_Hani\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.2, chr-F: 0.542### System Info:\n\n\n* hf\\_name: eng-vie\n* source\\_languages: eng\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'vi']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: vie\n* short\\_pair: en-vi\n* chrF2\\_score: 0.542\n* bleu: 37.2\n* brevity\\_penalty: 0.973\n* ref\\_len: 24427.0\n* src\\_name: English\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: en\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: eng-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en-xh
* source languages: en
* target languages: xh
* OPUS readme: [en-xh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-xh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-xh/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.xh | 37.9 | 0.652 |
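Since this pair has a single target language, no `>>id<<` prefix is needed; a short sketch using the high-level `transformers` pipeline API (the input sentence is illustrative):

```python
from transformers import pipeline

# The pipeline infers the translation task from the Marian checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-xh")
result = translator("Good morning, my friend.")
print(result[0]["translation_text"])
```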
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-xh | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"xh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #xh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en-xh
* source languages: en
* target languages: xh
* OPUS readme: en-xh
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.9, chr-F: 0.652
| [
"### opus-mt-en-xh\n\n\n* source languages: en\n* target languages: xh\n* OPUS readme: en-xh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.652"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #xh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en-xh\n\n\n* source languages: en\n* target languages: xh\n* OPUS readme: en-xh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.652"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #xh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en-xh\n\n\n* source languages: en\n* target languages: xh\n* OPUS readme: en-xh\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.652"
] |
translation | transformers |
### eng-zho
* source group: English
* target group: Chinese
* OPUS readme: [eng-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md)
* model: transformer
* source language(s): eng
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip)
* test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt)
* test set scores: [opus-2020-07-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.zho | 31.4 | 0.268 |
### System Info:
- hf_name: eng-zho
- source_languages: eng
- target_languages: zho
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zho/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'zh']
- src_constituents: {'eng'}
- tgt_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zho/opus-2020-07-17.test.txt
- src_alpha3: eng
- tgt_alpha3: zho
- short_pair: en-zh
- chrF2_score: 0.268
- bleu: 31.4
- brevity_penalty: 0.8959999999999999
- ref_len: 110468.0
- src_name: English
- tgt_name: Chinese
- train_date: 2020-07-17
- src_alpha2: en
- tgt_alpha2: zh
- prefer_old: False
- long_pair: eng-zho
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
| {"language": ["en", "zh"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-zh | null | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"zh"
] | TAGS
#transformers #pytorch #tf #jax #rust #marian #text2text-generation #translation #en #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-zho
* source group: English
* target group: Chinese
* OPUS readme: eng-zho
* model: transformer
* source language(s): eng
* target language(s): cjy\_Hans cjy\_Hant cmn cmn\_Hans cmn\_Hant gan lzh lzh\_Hans nan wuu yue yue\_Hans yue\_Hant
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.4, chr-F: 0.268
### System Info:
* hf\_name: eng-zho
* source\_languages: eng
* target\_languages: zho
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'zh']
* src\_constituents: {'eng'}
* tgt\_constituents: {'cmn\_Hans', 'nan', 'nan\_Hani', 'gan', 'yue', 'cmn\_Kana', 'yue\_Hani', 'wuu\_Bopo', 'cmn\_Latn', 'yue\_Hira', 'cmn\_Hani', 'cjy\_Hans', 'cmn', 'lzh\_Hang', 'lzh\_Hira', 'cmn\_Hant', 'lzh\_Bopo', 'zho', 'zho\_Hans', 'zho\_Hant', 'lzh\_Hani', 'yue\_Hang', 'wuu', 'yue\_Kana', 'wuu\_Latn', 'yue\_Bopo', 'cjy\_Hant', 'yue\_Hans', 'lzh', 'cmn\_Hira', 'lzh\_Yiii', 'lzh\_Hans', 'cmn\_Bopo', 'cmn\_Hang', 'hak\_Hani', 'cmn\_Yiii', 'yue\_Hant', 'lzh\_Kana', 'wuu\_Hani'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: zho
* short\_pair: en-zh
* chrF2\_score: 0.268
* bleu: 31.4
* brevity\_penalty: 0.8959999999999999
* ref\_len: 110468.0
* src\_name: English
* tgt\_name: Chinese
* train\_date: 2020-07-17
* src\_alpha2: en
* tgt\_alpha2: zh
* prefer\_old: False
* long\_pair: eng-zho
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-zho\n\n\n* source group: English\n* target group: Chinese\n* OPUS readme: eng-zho\n* model: transformer\n* source language(s): eng\n* target language(s): cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant gan lzh lzh\\_Hans nan wuu yue yue\\_Hans yue\\_Hant\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.268",
"### System Info:\n\n\n* hf\\_name: eng-zho\n* source\\_languages: eng\n* target\\_languages: zho\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'zh']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zho\n* short\\_pair: en-zh\n* chrF2\\_score: 0.268\n* bleu: 31.4\n* brevity\\_penalty: 0.8959999999999999\n* ref\\_len: 110468.0\n* src\\_name: English\n* tgt\\_name: Chinese\n* train\\_date: 2020-07-17\n* src\\_alpha2: en\n* tgt\\_alpha2: zh\n* prefer\\_old: False\n* long\\_pair: eng-zho\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #marian #text2text-generation #translation #en #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-zho\n\n\n* source group: English\n* target group: Chinese\n* OPUS readme: eng-zho\n* model: transformer\n* source language(s): eng\n* target language(s): cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant gan lzh lzh\\_Hans nan wuu yue yue\\_Hans yue\\_Hant\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.268",
"### System Info:\n\n\n* hf\\_name: eng-zho\n* source\\_languages: eng\n* target\\_languages: zho\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'zh']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zho\n* short\\_pair: en-zh\n* chrF2\\_score: 0.268\n* bleu: 31.4\n* brevity\\_penalty: 0.8959999999999999\n* ref\\_len: 110468.0\n* src\\_name: English\n* tgt\\_name: Chinese\n* train\\_date: 2020-07-17\n* src\\_alpha2: en\n* tgt\\_alpha2: zh\n* prefer\\_old: False\n* long\\_pair: eng-zho\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
56,
201,
714
] | [
"TAGS\n#transformers #pytorch #tf #jax #rust #marian #text2text-generation #translation #en #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-zho\n\n\n* source group: English\n* target group: Chinese\n* OPUS readme: eng-zho\n* model: transformer\n* source language(s): eng\n* target language(s): cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant gan lzh lzh\\_Hans nan wuu yue yue\\_Hans yue\\_Hant\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.4, chr-F: 0.268### System Info:\n\n\n* hf\\_name: eng-zho\n* source\\_languages: eng\n* target\\_languages: zho\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'zh']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'cmn\\_Hans', 'nan', 'nan\\_Hani', 'gan', 'yue', 'cmn\\_Kana', 'yue\\_Hani', 'wuu\\_Bopo', 'cmn\\_Latn', 'yue\\_Hira', 'cmn\\_Hani', 'cjy\\_Hans', 'cmn', 'lzh\\_Hang', 'lzh\\_Hira', 'cmn\\_Hant', 'lzh\\_Bopo', 'zho', 'zho\\_Hans', 'zho\\_Hant', 'lzh\\_Hani', 'yue\\_Hang', 'wuu', 'yue\\_Kana', 'wuu\\_Latn', 'yue\\_Bopo', 'cjy\\_Hant', 'yue\\_Hans', 'lzh', 'cmn\\_Hira', 'lzh\\_Yiii', 'lzh\\_Hans', 'cmn\\_Bopo', 'cmn\\_Hang', 'hak\\_Hani', 'cmn\\_Yiii', 'yue\\_Hant', 'lzh\\_Kana', 'wuu\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zho\n* short\\_pair: en-zh\n* chrF2\\_score: 0.268\n* bleu: 31.4\n* brevity\\_penalty: 0.8959999999999999\n* ref\\_len: 110468.0\n* src\\_name: English\n* tgt\\_name: Chinese\n* train\\_date: 2020-07-17\n* src\\_alpha2: en\n* 
tgt\\_alpha2: zh\n* prefer\\_old: False\n* long\\_pair: eng-zho\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-zle
* source group: English
* target group: East Slavic languages
* OPUS readme: [eng-zle](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zle/README.md)
* model: transformer
* source language(s): eng
* target language(s): bel bel_Latn orv_Cyrl rue rus ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-engrus.eng.rus | 27.4 | 0.550 |
| newstest2013-engrus.eng.rus | 21.4 | 0.493 |
| newstest2015-enru-engrus.eng.rus | 24.2 | 0.534 |
| newstest2016-enru-engrus.eng.rus | 23.3 | 0.518 |
| newstest2017-enru-engrus.eng.rus | 25.3 | 0.541 |
| newstest2018-enru-engrus.eng.rus | 22.4 | 0.527 |
| newstest2019-enru-engrus.eng.rus | 24.1 | 0.505 |
| Tatoeba-test.eng-bel.eng.bel | 20.8 | 0.471 |
| Tatoeba-test.eng.multi | 37.2 | 0.580 |
| Tatoeba-test.eng-orv.eng.orv | 0.6 | 0.130 |
| Tatoeba-test.eng-rue.eng.rue | 1.4 | 0.168 |
| Tatoeba-test.eng-rus.eng.rus | 41.3 | 0.616 |
| Tatoeba-test.eng-ukr.eng.ukr | 38.7 | 0.596 |
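Since the target side is multilingual, each source sentence must start with the `>>id<<` token described above. A minimal sketch using the `transformers` Marian classes (the helper name is ours; loading the model downloads weights from the Hub and requires network access):

```python
from typing import List


def with_target_token(sentences: List[str], lang_id: str) -> List[str]:
    """Prefix each sentence with the >>id<< token the multilingual
    Marian models expect (id must be a valid target language ID,
    e.g. rus, ukr, bel for this model)."""
    return [f">>{lang_id}<< {s}" for s in sentences]


def translate(sentences: List[str], lang_id: str = "rus",
              model_name: str = "Helsinki-NLP/opus-mt-en-zle") -> List[str]:
    # Network access required: fetches tokenizer and weights from the Hub.
    from transformers import MarianMTModel, MarianTokenizer
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(with_target_token(sentences, lang_id),
                      return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)


# The prefixing step itself is pure string manipulation:
print(with_target_token(["How are you?"], "ukr"))  # [">>ukr<< How are you?"]
```

Omitting the token makes the target language undefined, so outputs may come back in any of the supported East Slavic languages.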
### System Info:
- hf_name: eng-zle
- source_languages: eng
- target_languages: zle
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zle/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'be', 'ru', 'uk', 'zle']
- src_constituents: {'eng'}
- tgt_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zle/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zle
- short_pair: en-zle
- chrF2_score: 0.58
- bleu: 37.2
- brevity_penalty: 0.9890000000000001
- ref_len: 63493.0
- src_name: English
- tgt_name: East Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zle
- prefer_old: False
- long_pair: eng-zle
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "be", "ru", "uk", "zle"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-zle | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"be",
"ru",
"uk",
"zle",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"be",
"ru",
"uk",
"zle"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #be #ru #uk #zle #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-zle
* source group: English
* target group: East Slavic languages
* OPUS readme: eng-zle
* model: transformer
* source language(s): eng
* target language(s): bel bel\_Latn orv\_Cyrl rue rus ukr
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 27.4, chr-F: 0.550
testset: URL, BLEU: 21.4, chr-F: 0.493
testset: URL, BLEU: 24.2, chr-F: 0.534
testset: URL, BLEU: 23.3, chr-F: 0.518
testset: URL, BLEU: 25.3, chr-F: 0.541
testset: URL, BLEU: 22.4, chr-F: 0.527
testset: URL, BLEU: 24.1, chr-F: 0.505
testset: URL, BLEU: 20.8, chr-F: 0.471
testset: URL, BLEU: 37.2, chr-F: 0.580
testset: URL, BLEU: 0.6, chr-F: 0.130
testset: URL, BLEU: 1.4, chr-F: 0.168
testset: URL, BLEU: 41.3, chr-F: 0.616
testset: URL, BLEU: 38.7, chr-F: 0.596
### System Info:
* hf\_name: eng-zle
* source\_languages: eng
* target\_languages: zle
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'be', 'ru', 'uk', 'zle']
* src\_constituents: {'eng'}
* tgt\_constituents: {'bel', 'orv\_Cyrl', 'bel\_Latn', 'rus', 'ukr', 'rue'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: zle
* short\_pair: en-zle
* chrF2\_score: 0.58
* bleu: 37.2
* brevity\_penalty: 0.9890000000000001
* ref\_len: 63493.0
* src\_name: English
* tgt\_name: East Slavic languages
* train\_date: 2020-08-02
* src\_alpha2: en
* tgt\_alpha2: zle
* prefer\_old: False
* long\_pair: eng-zle
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-zle\n\n\n* source group: English\n* target group: East Slavic languages\n* OPUS readme: eng-zle\n* model: transformer\n* source language(s): eng\n* target language(s): bel bel\\_Latn orv\\_Cyrl rue rus ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.550\ntestset: URL, BLEU: 21.4, chr-F: 0.493\ntestset: URL, BLEU: 24.2, chr-F: 0.534\ntestset: URL, BLEU: 23.3, chr-F: 0.518\ntestset: URL, BLEU: 25.3, chr-F: 0.541\ntestset: URL, BLEU: 22.4, chr-F: 0.527\ntestset: URL, BLEU: 24.1, chr-F: 0.505\ntestset: URL, BLEU: 20.8, chr-F: 0.471\ntestset: URL, BLEU: 37.2, chr-F: 0.580\ntestset: URL, BLEU: 0.6, chr-F: 0.130\ntestset: URL, BLEU: 1.4, chr-F: 0.168\ntestset: URL, BLEU: 41.3, chr-F: 0.616\ntestset: URL, BLEU: 38.7, chr-F: 0.596",
"### System Info:\n\n\n* hf\\_name: eng-zle\n* source\\_languages: eng\n* target\\_languages: zle\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'be', 'ru', 'uk', 'zle']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zle\n* short\\_pair: en-zle\n* chrF2\\_score: 0.58\n* bleu: 37.2\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 63493.0\n* src\\_name: English\n* tgt\\_name: East Slavic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: zle\n* prefer\\_old: False\n* long\\_pair: eng-zle\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #be #ru #uk #zle #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-zle\n\n\n* source group: English\n* target group: East Slavic languages\n* OPUS readme: eng-zle\n* model: transformer\n* source language(s): eng\n* target language(s): bel bel\\_Latn orv\\_Cyrl rue rus ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.550\ntestset: URL, BLEU: 21.4, chr-F: 0.493\ntestset: URL, BLEU: 24.2, chr-F: 0.534\ntestset: URL, BLEU: 23.3, chr-F: 0.518\ntestset: URL, BLEU: 25.3, chr-F: 0.541\ntestset: URL, BLEU: 22.4, chr-F: 0.527\ntestset: URL, BLEU: 24.1, chr-F: 0.505\ntestset: URL, BLEU: 20.8, chr-F: 0.471\ntestset: URL, BLEU: 37.2, chr-F: 0.580\ntestset: URL, BLEU: 0.6, chr-F: 0.130\ntestset: URL, BLEU: 1.4, chr-F: 0.168\ntestset: URL, BLEU: 41.3, chr-F: 0.616\ntestset: URL, BLEU: 38.7, chr-F: 0.596",
"### System Info:\n\n\n* hf\\_name: eng-zle\n* source\\_languages: eng\n* target\\_languages: zle\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'be', 'ru', 'uk', 'zle']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zle\n* short\\_pair: en-zle\n* chrF2\\_score: 0.58\n* bleu: 37.2\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 63493.0\n* src\\_name: English\n* tgt\\_name: East Slavic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: zle\n* prefer\\_old: False\n* long\\_pair: eng-zle\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
58,
445,
449
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #be #ru #uk #zle #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-zle\n\n\n* source group: English\n* target group: East Slavic languages\n* OPUS readme: eng-zle\n* model: transformer\n* source language(s): eng\n* target language(s): bel bel\\_Latn orv\\_Cyrl rue rus ukr\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.4, chr-F: 0.550\ntestset: URL, BLEU: 21.4, chr-F: 0.493\ntestset: URL, BLEU: 24.2, chr-F: 0.534\ntestset: URL, BLEU: 23.3, chr-F: 0.518\ntestset: URL, BLEU: 25.3, chr-F: 0.541\ntestset: URL, BLEU: 22.4, chr-F: 0.527\ntestset: URL, BLEU: 24.1, chr-F: 0.505\ntestset: URL, BLEU: 20.8, chr-F: 0.471\ntestset: URL, BLEU: 37.2, chr-F: 0.580\ntestset: URL, BLEU: 0.6, chr-F: 0.130\ntestset: URL, BLEU: 1.4, chr-F: 0.168\ntestset: URL, BLEU: 41.3, chr-F: 0.616\ntestset: URL, BLEU: 38.7, chr-F: 0.596### System Info:\n\n\n* hf\\_name: eng-zle\n* source\\_languages: eng\n* target\\_languages: zle\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'be', 'ru', 'uk', 'zle']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'bel', 'orv\\_Cyrl', 'bel\\_Latn', 'rus', 'ukr', 'rue'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zle\n* short\\_pair: en-zle\n* chrF2\\_score: 0.58\n* bleu: 37.2\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 63493.0\n* src\\_name: English\n* tgt\\_name: East Slavic languages\n* train\\_date: 2020-08-02\n* 
src\\_alpha2: en\n* tgt\\_alpha2: zle\n* prefer\\_old: False\n* long\\_pair: eng-zle\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### eng-zls
* source group: English
* target group: South Slavic languages
* OPUS readme: [eng-zls](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zls/README.md)
* model: transformer
* source language(s): eng
* target language(s): bos_Latn bul bul_Latn hrv mkd slv srp_Cyrl srp_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng-bul.eng.bul | 47.6 | 0.657 |
| Tatoeba-test.eng-hbs.eng.hbs | 40.7 | 0.619 |
| Tatoeba-test.eng-mkd.eng.mkd | 45.2 | 0.642 |
| Tatoeba-test.eng.multi | 42.7 | 0.622 |
| Tatoeba-test.eng-slv.eng.slv | 17.9 | 0.351 |
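Because one checkpoint covers several South Slavic targets, the same English input can be fanned out to multiple languages simply by varying the `>>id<<` prefix. A small sketch (the helper name is ours; the prefixed strings are what you would pass to the `Helsinki-NLP/opus-mt-en-zls` tokenizer):

```python
from typing import Dict, Iterable


def tag_for_targets(sentence: str, lang_ids: Iterable[str]) -> Dict[str, str]:
    """Build one >>id<<-prefixed input per requested target language."""
    return {lang: f">>{lang}<< {sentence}" for lang in lang_ids}


inputs = tag_for_targets("Good morning!", ["bul", "hrv", "mkd", "slv"])
for lang, text in inputs.items():
    print(lang, "->", text)
```

Each value in the resulting dict is a ready-to-tokenize input for the corresponding target language.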
### System Info:
- hf_name: eng-zls
- source_languages: eng
- target_languages: zls
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zls/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'hr', 'mk', 'bg', 'sl', 'zls']
- src_constituents: {'eng'}
- tgt_constituents: {'hrv', 'mkd', 'srp_Latn', 'srp_Cyrl', 'bul_Latn', 'bul', 'bos_Latn', 'slv'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zls/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zls
- short_pair: en-zls
- chrF2_score: 0.622
- bleu: 42.7
- brevity_penalty: 0.9690000000000001
- ref_len: 64788.0
- src_name: English
- tgt_name: South Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zls
- prefer_old: False
- long_pair: eng-zls
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "hr", "mk", "bg", "sl", "zls"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-zls | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"hr",
"mk",
"bg",
"sl",
"zls",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"hr",
"mk",
"bg",
"sl",
"zls"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #hr #mk #bg #sl #zls #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-zls
* source group: English
* target group: South Slavic languages
* OPUS readme: eng-zls
* model: transformer
* source language(s): eng
* target language(s): bos\_Latn bul bul\_Latn hrv mkd slv srp\_Cyrl srp\_Latn
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 47.6, chr-F: 0.657
testset: URL, BLEU: 40.7, chr-F: 0.619
testset: URL, BLEU: 45.2, chr-F: 0.642
testset: URL, BLEU: 42.7, chr-F: 0.622
testset: URL, BLEU: 17.9, chr-F: 0.351
### System Info:
* hf\_name: eng-zls
* source\_languages: eng
* target\_languages: zls
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'hr', 'mk', 'bg', 'sl', 'zls']
* src\_constituents: {'eng'}
* tgt\_constituents: {'hrv', 'mkd', 'srp\_Latn', 'srp\_Cyrl', 'bul\_Latn', 'bul', 'bos\_Latn', 'slv'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: zls
* short\_pair: en-zls
* chrF2\_score: 0.622
* bleu: 42.7
* brevity\_penalty: 0.9690000000000001
* ref\_len: 64788.0
* src\_name: English
* tgt\_name: South Slavic languages
* train\_date: 2020-08-02
* src\_alpha2: en
* tgt\_alpha2: zls
* prefer\_old: False
* long\_pair: eng-zls
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-zls\n\n\n* source group: English\n* target group: South Slavic languages\n* OPUS readme: eng-zls\n* model: transformer\n* source language(s): eng\n* target language(s): bos\\_Latn bul bul\\_Latn hrv mkd slv srp\\_Cyrl srp\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.6, chr-F: 0.657\ntestset: URL, BLEU: 40.7, chr-F: 0.619\ntestset: URL, BLEU: 45.2, chr-F: 0.642\ntestset: URL, BLEU: 42.7, chr-F: 0.622\ntestset: URL, BLEU: 17.9, chr-F: 0.351",
"### System Info:\n\n\n* hf\\_name: eng-zls\n* source\\_languages: eng\n* target\\_languages: zls\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'hr', 'mk', 'bg', 'sl', 'zls']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zls\n* short\\_pair: en-zls\n* chrF2\\_score: 0.622\n* bleu: 42.7\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 64788.0\n* src\\_name: English\n* tgt\\_name: South Slavic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: zls\n* prefer\\_old: False\n* long\\_pair: eng-zls\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #hr #mk #bg #sl #zls #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-zls\n\n\n* source group: English\n* target group: South Slavic languages\n* OPUS readme: eng-zls\n* model: transformer\n* source language(s): eng\n* target language(s): bos\\_Latn bul bul\\_Latn hrv mkd slv srp\\_Cyrl srp\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.6, chr-F: 0.657\ntestset: URL, BLEU: 40.7, chr-F: 0.619\ntestset: URL, BLEU: 45.2, chr-F: 0.642\ntestset: URL, BLEU: 42.7, chr-F: 0.622\ntestset: URL, BLEU: 17.9, chr-F: 0.351",
"### System Info:\n\n\n* hf\\_name: eng-zls\n* source\\_languages: eng\n* target\\_languages: zls\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'hr', 'mk', 'bg', 'sl', 'zls']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zls\n* short\\_pair: en-zls\n* chrF2\\_score: 0.622\n* bleu: 42.7\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 64788.0\n* src\\_name: English\n* tgt\\_name: South Slavic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: zls\n* prefer\\_old: False\n* long\\_pair: eng-zls\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
61,
283,
480
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #hr #mk #bg #sl #zls #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-zls\n\n\n* source group: English\n* target group: South Slavic languages\n* OPUS readme: eng-zls\n* model: transformer\n* source language(s): eng\n* target language(s): bos\\_Latn bul bul\\_Latn hrv mkd slv srp\\_Cyrl srp\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.6, chr-F: 0.657\ntestset: URL, BLEU: 40.7, chr-F: 0.619\ntestset: URL, BLEU: 45.2, chr-F: 0.642\ntestset: URL, BLEU: 42.7, chr-F: 0.622\ntestset: URL, BLEU: 17.9, chr-F: 0.351### System Info:\n\n\n* hf\\_name: eng-zls\n* source\\_languages: eng\n* target\\_languages: zls\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'hr', 'mk', 'bg', 'sl', 'zls']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'hrv', 'mkd', 'srp\\_Latn', 'srp\\_Cyrl', 'bul\\_Latn', 'bul', 'bos\\_Latn', 'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zls\n* short\\_pair: en-zls\n* chrF2\\_score: 0.622\n* bleu: 42.7\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 64788.0\n* src\\_name: English\n* tgt\\_name: South Slavic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: zls\n* prefer\\_old: False\n* long\\_pair: eng-zls\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 
2020-08-21-14:41"
] |
translation | transformers |
### eng-zlw
* source group: English
* target group: West Slavic languages
* OPUS readme: [eng-zlw](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zlw/README.md)
* model: transformer
* source language(s): eng
* target language(s): ces csb_Latn dsb hsb pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus2m-2020-08-02.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.zip)
* test set translations: [opus2m-2020-08-02.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.test.txt)
* test set scores: [opus2m-2020-08-02.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engces.eng.ces | 20.6 | 0.488 |
| news-test2008-engces.eng.ces | 18.3 | 0.466 |
| newstest2009-engces.eng.ces | 19.8 | 0.483 |
| newstest2010-engces.eng.ces | 19.8 | 0.486 |
| newstest2011-engces.eng.ces | 20.6 | 0.489 |
| newstest2012-engces.eng.ces | 18.6 | 0.464 |
| newstest2013-engces.eng.ces | 22.3 | 0.495 |
| newstest2015-encs-engces.eng.ces | 21.7 | 0.502 |
| newstest2016-encs-engces.eng.ces | 24.5 | 0.521 |
| newstest2017-encs-engces.eng.ces | 20.1 | 0.480 |
| newstest2018-encs-engces.eng.ces | 19.9 | 0.483 |
| newstest2019-encs-engces.eng.ces | 21.2 | 0.490 |
| Tatoeba-test.eng-ces.eng.ces | 43.7 | 0.632 |
| Tatoeba-test.eng-csb.eng.csb | 1.2 | 0.188 |
| Tatoeba-test.eng-dsb.eng.dsb | 1.5 | 0.167 |
| Tatoeba-test.eng-hsb.eng.hsb | 5.7 | 0.199 |
| Tatoeba-test.eng.multi | 42.8 | 0.632 |
| Tatoeba-test.eng-pol.eng.pol | 43.2 | 0.641 |
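The same `>>id<<` convention applies here; with the high-level `pipeline` API the only extra step is prefixing the inputs. A sketch under the assumption that network access is available to fetch the checkpoint (the function names are ours):

```python
from typing import List


def prefix(sentences: List[str], lang_id: str) -> List[str]:
    """Attach the sentence-initial >>id<< target-language token."""
    return [f">>{lang_id}<< {s}" for s in sentences]


def translate_en_to_zlw(sentences: List[str], lang_id: str = "ces",
                        model_name: str = "Helsinki-NLP/opus-mt-en-zlw") -> List[str]:
    # Downloads the model from the Hub on first use (network required).
    from transformers import pipeline
    translator = pipeline("translation", model=model_name)
    return [out["translation_text"] for out in translator(prefix(sentences, lang_id))]


# Prefixing alone, without loading the model:
print(prefix(["Where is the station?"], "pol"))
```

Valid target IDs for this model are the ones listed under `tgt_constituents` below (e.g. `ces`, `pol`, `hsb`).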
### System Info:
- hf_name: eng-zlw
- source_languages: eng
- target_languages: zlw
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-zlw/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'pl', 'cs', 'zlw']
- src_constituents: {'eng'}
- tgt_constituents: {'csb_Latn', 'dsb', 'hsb', 'pol', 'ces'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-zlw/opus2m-2020-08-02.test.txt
- src_alpha3: eng
- tgt_alpha3: zlw
- short_pair: en-zlw
- chrF2_score: 0.632
- bleu: 42.8
- brevity_penalty: 0.973
- ref_len: 65397.0
- src_name: English
- tgt_name: West Slavic languages
- train_date: 2020-08-02
- src_alpha2: en
- tgt_alpha2: zlw
- prefer_old: False
- long_pair: eng-zlw
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["en", "pl", "cs", "zlw"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en-zlw | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"pl",
"cs",
"zlw",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"pl",
"cs",
"zlw"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #en #pl #cs #zlw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### eng-zlw
* source group: English
* target group: West Slavic languages
* OPUS readme: eng-zlw
* model: transformer
* source language(s): eng
* target language(s): ces csb\_Latn dsb hsb pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.6, chr-F: 0.488
testset: URL, BLEU: 18.3, chr-F: 0.466
testset: URL, BLEU: 19.8, chr-F: 0.483
testset: URL, BLEU: 19.8, chr-F: 0.486
testset: URL, BLEU: 20.6, chr-F: 0.489
testset: URL, BLEU: 18.6, chr-F: 0.464
testset: URL, BLEU: 22.3, chr-F: 0.495
testset: URL, BLEU: 21.7, chr-F: 0.502
testset: URL, BLEU: 24.5, chr-F: 0.521
testset: URL, BLEU: 20.1, chr-F: 0.480
testset: URL, BLEU: 19.9, chr-F: 0.483
testset: URL, BLEU: 21.2, chr-F: 0.490
testset: URL, BLEU: 43.7, chr-F: 0.632
testset: URL, BLEU: 1.2, chr-F: 0.188
testset: URL, BLEU: 1.5, chr-F: 0.167
testset: URL, BLEU: 5.7, chr-F: 0.199
testset: URL, BLEU: 42.8, chr-F: 0.632
testset: URL, BLEU: 43.2, chr-F: 0.641
### System Info:
* hf\_name: eng-zlw
* source\_languages: eng
* target\_languages: zlw
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['en', 'pl', 'cs', 'zlw']
* src\_constituents: {'eng'}
* tgt\_constituents: {'csb\_Latn', 'dsb', 'hsb', 'pol', 'ces'}
* src\_multilingual: False
* tgt\_multilingual: True
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: eng
* tgt\_alpha3: zlw
* short\_pair: en-zlw
* chrF2\_score: 0.632
* bleu: 42.8
* brevity\_penalty: 0.973
* ref\_len: 65397.0
* src\_name: English
* tgt\_name: West Slavic languages
* train\_date: 2020-08-02
* src\_alpha2: en
* tgt\_alpha2: zlw
* prefer\_old: False
* long\_pair: eng-zlw
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### eng-zlw\n\n\n* source group: English\n* target group: West Slavic languages\n* OPUS readme: eng-zlw\n* model: transformer\n* source language(s): eng\n* target language(s): ces csb\\_Latn dsb hsb pol\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.6, chr-F: 0.488\ntestset: URL, BLEU: 18.3, chr-F: 0.466\ntestset: URL, BLEU: 19.8, chr-F: 0.483\ntestset: URL, BLEU: 19.8, chr-F: 0.486\ntestset: URL, BLEU: 20.6, chr-F: 0.489\ntestset: URL, BLEU: 18.6, chr-F: 0.464\ntestset: URL, BLEU: 22.3, chr-F: 0.495\ntestset: URL, BLEU: 21.7, chr-F: 0.502\ntestset: URL, BLEU: 24.5, chr-F: 0.521\ntestset: URL, BLEU: 20.1, chr-F: 0.480\ntestset: URL, BLEU: 19.9, chr-F: 0.483\ntestset: URL, BLEU: 21.2, chr-F: 0.490\ntestset: URL, BLEU: 43.7, chr-F: 0.632\ntestset: URL, BLEU: 1.2, chr-F: 0.188\ntestset: URL, BLEU: 1.5, chr-F: 0.167\ntestset: URL, BLEU: 5.7, chr-F: 0.199\ntestset: URL, BLEU: 42.8, chr-F: 0.632\ntestset: URL, BLEU: 43.2, chr-F: 0.641",
"### System Info:\n\n\n* hf\\_name: eng-zlw\n* source\\_languages: eng\n* target\\_languages: zlw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'pl', 'cs', 'zlw']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zlw\n* short\\_pair: en-zlw\n* chrF2\\_score: 0.632\n* bleu: 42.8\n* brevity\\_penalty: 0.973\n* ref\\_len: 65397.0\n* src\\_name: English\n* tgt\\_name: West Slavic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: zlw\n* prefer\\_old: False\n* long\\_pair: eng-zlw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pl #cs #zlw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### eng-zlw\n\n\n* source group: English\n* target group: West Slavic languages\n* OPUS readme: eng-zlw\n* model: transformer\n* source language(s): eng\n* target language(s): ces csb\\_Latn dsb hsb pol\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.6, chr-F: 0.488\ntestset: URL, BLEU: 18.3, chr-F: 0.466\ntestset: URL, BLEU: 19.8, chr-F: 0.483\ntestset: URL, BLEU: 19.8, chr-F: 0.486\ntestset: URL, BLEU: 20.6, chr-F: 0.489\ntestset: URL, BLEU: 18.6, chr-F: 0.464\ntestset: URL, BLEU: 22.3, chr-F: 0.495\ntestset: URL, BLEU: 21.7, chr-F: 0.502\ntestset: URL, BLEU: 24.5, chr-F: 0.521\ntestset: URL, BLEU: 20.1, chr-F: 0.480\ntestset: URL, BLEU: 19.9, chr-F: 0.483\ntestset: URL, BLEU: 21.2, chr-F: 0.490\ntestset: URL, BLEU: 43.7, chr-F: 0.632\ntestset: URL, BLEU: 1.2, chr-F: 0.188\ntestset: URL, BLEU: 1.5, chr-F: 0.167\ntestset: URL, BLEU: 5.7, chr-F: 0.199\ntestset: URL, BLEU: 42.8, chr-F: 0.632\ntestset: URL, BLEU: 43.2, chr-F: 0.641",
"### System Info:\n\n\n* hf\\_name: eng-zlw\n* source\\_languages: eng\n* target\\_languages: zlw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'pl', 'cs', 'zlw']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zlw\n* short\\_pair: en-zlw\n* chrF2\\_score: 0.632\n* bleu: 42.8\n* brevity\\_penalty: 0.973\n* ref\\_len: 65397.0\n* src\\_name: English\n* tgt\\_name: West Slavic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: zlw\n* prefer\\_old: False\n* long\\_pair: eng-zlw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
57,
557,
441
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #en #pl #cs #zlw #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### eng-zlw\n\n\n* source group: English\n* target group: West Slavic languages\n* OPUS readme: eng-zlw\n* model: transformer\n* source language(s): eng\n* target language(s): ces csb\\_Latn dsb hsb pol\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.6, chr-F: 0.488\ntestset: URL, BLEU: 18.3, chr-F: 0.466\ntestset: URL, BLEU: 19.8, chr-F: 0.483\ntestset: URL, BLEU: 19.8, chr-F: 0.486\ntestset: URL, BLEU: 20.6, chr-F: 0.489\ntestset: URL, BLEU: 18.6, chr-F: 0.464\ntestset: URL, BLEU: 22.3, chr-F: 0.495\ntestset: URL, BLEU: 21.7, chr-F: 0.502\ntestset: URL, BLEU: 24.5, chr-F: 0.521\ntestset: URL, BLEU: 20.1, chr-F: 0.480\ntestset: URL, BLEU: 19.9, chr-F: 0.483\ntestset: URL, BLEU: 21.2, chr-F: 0.490\ntestset: URL, BLEU: 43.7, chr-F: 0.632\ntestset: URL, BLEU: 1.2, chr-F: 0.188\ntestset: URL, BLEU: 1.5, chr-F: 0.167\ntestset: URL, BLEU: 5.7, chr-F: 0.199\ntestset: URL, BLEU: 42.8, chr-F: 0.632\ntestset: URL, BLEU: 43.2, chr-F: 0.641### System Info:\n\n\n* hf\\_name: eng-zlw\n* source\\_languages: eng\n* target\\_languages: zlw\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['en', 'pl', 'cs', 'zlw']\n* src\\_constituents: {'eng'}\n* tgt\\_constituents: {'csb\\_Latn', 'dsb', 'hsb', 'pol', 'ces'}\n* src\\_multilingual: False\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: eng\n* tgt\\_alpha3: zlw\n* short\\_pair: en-zlw\n* chrF2\\_score: 0.632\n* bleu: 
42.8\n* brevity\\_penalty: 0.973\n* ref\\_len: 65397.0\n* src\\_name: English\n* tgt\\_name: West Slavic languages\n* train\\_date: 2020-08-02\n* src\\_alpha2: en\n* tgt\\_alpha2: zlw\n* prefer\\_old: False\n* long\\_pair: eng-zlw\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-en_el_es_fi-en_el_es_fi
* source languages: en,el,es,fi
* target languages: en,el,es,fi
* OPUS readme: [en+el+es+fi-en+el+es+fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en+el+es+fi-en+el+es+fi/README.md)
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-03-02.zip](https://object.pouta.csc.fi/OPUS-MT-models/en+el+es+fi-en+el+es+fi/opus-2020-03-02.zip)
* test set translations: [opus-2020-03-02.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en+el+es+fi-en+el+es+fi/opus-2020-03-02.test.txt)
* test set scores: [opus-2020-03-02.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en+el+es+fi-en+el+es+fi/opus-2020-03-02.eval.txt)
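As the list above notes, this multilingual model expects a sentence-initial target-language token of the form `>>id<<` prepended to the source text. A minimal sketch of preparing such input (the helper name is ours, not part of the OPUS-MT tooling):

```python
def add_target_token(text: str, target_lang: str) -> str:
    # Multilingual OPUS-MT models route translation via a target-language
    # token such as ">>fi<<" placed at the start of the source sentence.
    return f">>{target_lang}<< {text}"

# e.g. translating into Finnish with the en+el+es+fi model:
print(add_target_token("Hello world", "fi"))  # >>fi<< Hello world
```

The tokenizer then treats the `>>id<<` marker as an ordinary leading token; the model was trained with it, so omitting it leaves the target language undetermined.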
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2015-enfi.en.fi | 16.0 | 0.498 |
| newssyscomb2009.en.es | 29.9 | 0.570 |
| newssyscomb2009.es.en | 29.7 | 0.569 |
| news-test2008.en.es | 27.3 | 0.549 |
| news-test2008.es.en | 27.3 | 0.548 |
| newstest2009.en.es | 28.4 | 0.564 |
| newstest2009.es.en | 28.4 | 0.564 |
| newstest2010.en.es | 34.0 | 0.599 |
| newstest2010.es.en | 34.0 | 0.599 |
| newstest2011.en.es | 35.1 | 0.600 |
| newstest2012.en.es | 35.4 | 0.602 |
| newstest2013.en.es | 31.9 | 0.576 |
| newstest2015-enfi.en.fi | 17.8 | 0.509 |
| newstest2016-enfi.en.fi | 19.0 | 0.521 |
| newstest2017-enfi.en.fi | 21.2 | 0.539 |
| newstest2018-enfi.en.fi | 13.9 | 0.478 |
| newstest2019-enfi.en.fi | 18.8 | 0.503 |
| newstestB2016-enfi.en.fi | 14.9 | 0.491 |
| newstestB2017-enfi.en.fi | 16.9 | 0.503 |
| simplification.en.en | 63.0 | 0.798 |
| Tatoeba.en.fi | 56.7 | 0.719 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-en_el_es_fi-en_el_es_fi | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"el",
"es",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #translation #en #el #es #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-en\_el\_es\_fi-en\_el\_es\_fi
* source languages: en,el,es,fi
* target languages: en,el,es,fi
* OPUS readme: en+el+es+fi-en+el+es+fi
* dataset: opus
* model: transformer
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 16.0, chr-F: 0.498
testset: URL, BLEU: 29.9, chr-F: 0.570
testset: URL, BLEU: 29.7, chr-F: 0.569
testset: URL, BLEU: 27.3, chr-F: 0.549
testset: URL, BLEU: 27.3, chr-F: 0.548
testset: URL, BLEU: 28.4, chr-F: 0.564
testset: URL, BLEU: 28.4, chr-F: 0.564
testset: URL, BLEU: 34.0, chr-F: 0.599
testset: URL, BLEU: 34.0, chr-F: 0.599
testset: URL, BLEU: 35.1, chr-F: 0.600
testset: URL, BLEU: 35.4, chr-F: 0.602
testset: URL, BLEU: 31.9, chr-F: 0.576
testset: URL, BLEU: 17.8, chr-F: 0.509
testset: URL, BLEU: 19.0, chr-F: 0.521
testset: URL, BLEU: 21.2, chr-F: 0.539
testset: URL, BLEU: 13.9, chr-F: 0.478
testset: URL, BLEU: 18.8, chr-F: 0.503
testset: URL, BLEU: 14.9, chr-F: 0.491
testset: URL, BLEU: 16.9, chr-F: 0.503
testset: URL, BLEU: 63.0, chr-F: 0.798
testset: URL, BLEU: 56.7, chr-F: 0.719
| [
"### opus-mt-en\\_el\\_es\\_fi-en\\_el\\_es\\_fi\n\n\n* source languages: en,el,es,fi\n* target languages: en,el,es,fi\n* OPUS readme: en+el+es+fi-en+el+es+fi\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.0, chr-F: 0.498\ntestset: URL, BLEU: 29.9, chr-F: 0.570\ntestset: URL, BLEU: 29.7, chr-F: 0.569\ntestset: URL, BLEU: 27.3, chr-F: 0.549\ntestset: URL, BLEU: 27.3, chr-F: 0.548\ntestset: URL, BLEU: 28.4, chr-F: 0.564\ntestset: URL, BLEU: 28.4, chr-F: 0.564\ntestset: URL, BLEU: 34.0, chr-F: 0.599\ntestset: URL, BLEU: 34.0, chr-F: 0.599\ntestset: URL, BLEU: 35.1, chr-F: 0.600\ntestset: URL, BLEU: 35.4, chr-F: 0.602\ntestset: URL, BLEU: 31.9, chr-F: 0.576\ntestset: URL, BLEU: 17.8, chr-F: 0.509\ntestset: URL, BLEU: 19.0, chr-F: 0.521\ntestset: URL, BLEU: 21.2, chr-F: 0.539\ntestset: URL, BLEU: 13.9, chr-F: 0.478\ntestset: URL, BLEU: 18.8, chr-F: 0.503\ntestset: URL, BLEU: 14.9, chr-F: 0.491\ntestset: URL, BLEU: 16.9, chr-F: 0.503\ntestset: URL, BLEU: 63.0, chr-F: 0.798\ntestset: URL, BLEU: 56.7, chr-F: 0.719"
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #en #el #es #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-en\\_el\\_es\\_fi-en\\_el\\_es\\_fi\n\n\n* source languages: en,el,es,fi\n* target languages: en,el,es,fi\n* OPUS readme: en+el+es+fi-en+el+es+fi\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.0, chr-F: 0.498\ntestset: URL, BLEU: 29.9, chr-F: 0.570\ntestset: URL, BLEU: 29.7, chr-F: 0.569\ntestset: URL, BLEU: 27.3, chr-F: 0.549\ntestset: URL, BLEU: 27.3, chr-F: 0.548\ntestset: URL, BLEU: 28.4, chr-F: 0.564\ntestset: URL, BLEU: 28.4, chr-F: 0.564\ntestset: URL, BLEU: 34.0, chr-F: 0.599\ntestset: URL, BLEU: 34.0, chr-F: 0.599\ntestset: URL, BLEU: 35.1, chr-F: 0.600\ntestset: URL, BLEU: 35.4, chr-F: 0.602\ntestset: URL, BLEU: 31.9, chr-F: 0.576\ntestset: URL, BLEU: 17.8, chr-F: 0.509\ntestset: URL, BLEU: 19.0, chr-F: 0.521\ntestset: URL, BLEU: 21.2, chr-F: 0.539\ntestset: URL, BLEU: 13.9, chr-F: 0.478\ntestset: URL, BLEU: 18.8, chr-F: 0.503\ntestset: URL, BLEU: 14.9, chr-F: 0.491\ntestset: URL, BLEU: 16.9, chr-F: 0.503\ntestset: URL, BLEU: 63.0, chr-F: 0.798\ntestset: URL, BLEU: 56.7, chr-F: 0.719"
] | [
52,
630
] | [
"TAGS\n#transformers #pytorch #marian #text2text-generation #translation #en #el #es #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-en\\_el\\_es\\_fi-en\\_el\\_es\\_fi\n\n\n* source languages: en,el,es,fi\n* target languages: en,el,es,fi\n* OPUS readme: en+el+es+fi-en+el+es+fi\n* dataset: opus\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.0, chr-F: 0.498\ntestset: URL, BLEU: 29.9, chr-F: 0.570\ntestset: URL, BLEU: 29.7, chr-F: 0.569\ntestset: URL, BLEU: 27.3, chr-F: 0.549\ntestset: URL, BLEU: 27.3, chr-F: 0.548\ntestset: URL, BLEU: 28.4, chr-F: 0.564\ntestset: URL, BLEU: 28.4, chr-F: 0.564\ntestset: URL, BLEU: 34.0, chr-F: 0.599\ntestset: URL, BLEU: 34.0, chr-F: 0.599\ntestset: URL, BLEU: 35.1, chr-F: 0.600\ntestset: URL, BLEU: 35.4, chr-F: 0.602\ntestset: URL, BLEU: 31.9, chr-F: 0.576\ntestset: URL, BLEU: 17.8, chr-F: 0.509\ntestset: URL, BLEU: 19.0, chr-F: 0.521\ntestset: URL, BLEU: 21.2, chr-F: 0.539\ntestset: URL, BLEU: 13.9, chr-F: 0.478\ntestset: URL, BLEU: 18.8, chr-F: 0.503\ntestset: URL, BLEU: 14.9, chr-F: 0.491\ntestset: URL, BLEU: 16.9, chr-F: 0.503\ntestset: URL, BLEU: 63.0, chr-F: 0.798\ntestset: URL, BLEU: 56.7, chr-F: 0.719"
] |
translation | transformers |
### epo-afr
* source group: Esperanto
* target group: Afrikaans
* OPUS readme: [epo-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-afr/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.afr | 19.5 | 0.369 |
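The System Info block for each model reports a `brevity_penalty` alongside BLEU. Assuming it follows the standard BLEU definition (1.0 when the hypothesis corpus is at least as long as the reference, `exp(1 - r/c)` otherwise), it can be sketched as:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    # BLEU's brevity penalty: penalises translations that are shorter
    # than the reference; no bonus is given for longer output.
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

Under this reading, a reported penalty of 0.957 with `ref_len: 8432.0` simply indicates the system's output was slightly shorter than the reference corpus.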
### System Info:
- hf_name: epo-afr
- source_languages: epo
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'af']
- src_constituents: {'epo'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-afr/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: afr
- short_pair: eo-af
- chrF2_score: 0.369
- bleu: 19.5
- brevity_penalty: 0.9570000000000001
- ref_len: 8432.0
- src_name: Esperanto
- tgt_name: Afrikaans
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: af
- prefer_old: False
- long_pair: epo-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "af"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-af | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"af",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"af"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-afr
* source group: Esperanto
* target group: Afrikaans
* OPUS readme: epo-afr
* model: transformer-align
* source language(s): epo
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 19.5, chr-F: 0.369
### System Info:
* hf\_name: epo-afr
* source\_languages: epo
* target\_languages: afr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'af']
* src\_constituents: {'epo'}
* tgt\_constituents: {'afr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: afr
* short\_pair: eo-af
* chrF2\_score: 0.369
* bleu: 19.5
* brevity\_penalty: 0.9570000000000001
* ref\_len: 8432.0
* src\_name: Esperanto
* tgt\_name: Afrikaans
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: af
* prefer\_old: False
* long\_pair: epo-afr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-afr\n\n\n* source group: Esperanto\n* target group: Afrikaans\n* OPUS readme: epo-afr\n* model: transformer-align\n* source language(s): epo\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.5, chr-F: 0.369",
"### System Info:\n\n\n* hf\\_name: epo-afr\n* source\\_languages: epo\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'af']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: afr\n* short\\_pair: eo-af\n* chrF2\\_score: 0.369\n* bleu: 19.5\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 8432.0\n* src\\_name: Esperanto\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: epo-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-afr\n\n\n* source group: Esperanto\n* target group: Afrikaans\n* OPUS readme: epo-afr\n* model: transformer-align\n* source language(s): epo\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.5, chr-F: 0.369",
"### System Info:\n\n\n* hf\\_name: epo-afr\n* source\\_languages: epo\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'af']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: afr\n* short\\_pair: eo-af\n* chrF2\\_score: 0.369\n* bleu: 19.5\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 8432.0\n* src\\_name: Esperanto\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: epo-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
139,
412
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-afr\n\n\n* source group: Esperanto\n* target group: Afrikaans\n* OPUS readme: epo-afr\n* model: transformer-align\n* source language(s): epo\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.5, chr-F: 0.369### System Info:\n\n\n* hf\\_name: epo-afr\n* source\\_languages: epo\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'af']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: afr\n* short\\_pair: eo-af\n* chrF2\\_score: 0.369\n* bleu: 19.5\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 8432.0\n* src\\_name: Esperanto\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: epo-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-bul
* source group: Esperanto
* target group: Bulgarian
* OPUS readme: [epo-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): bul
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.bul | 19.0 | 0.395 |
### System Info:
- hf_name: epo-bul
- source_languages: epo
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'bg']
- src_constituents: {'epo'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-bul/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: bul
- short_pair: eo-bg
- chrF2_score: 0.395
- bleu: 19.0
- brevity_penalty: 0.8909999999999999
- ref_len: 3961.0
- src_name: Esperanto
- tgt_name: Bulgarian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: bg
- prefer_old: False
- long_pair: epo-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "bg"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-bg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"bg"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-bul
* source group: Esperanto
* target group: Bulgarian
* OPUS readme: epo-bul
* model: transformer-align
* source language(s): epo
* target language(s): bul
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 19.0, chr-F: 0.395
### System Info:
* hf\_name: epo-bul
* source\_languages: epo
* target\_languages: bul
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'bg']
* src\_constituents: {'epo'}
* tgt\_constituents: {'bul', 'bul\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: bul
* short\_pair: eo-bg
* chrF2\_score: 0.395
* bleu: 19.0
* brevity\_penalty: 0.8909999999999999
* ref\_len: 3961.0
* src\_name: Esperanto
* tgt\_name: Bulgarian
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: bg
* prefer\_old: False
* long\_pair: epo-bul
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-bul\n\n\n* source group: Esperanto\n* target group: Bulgarian\n* OPUS readme: epo-bul\n* model: transformer-align\n* source language(s): epo\n* target language(s): bul\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.0, chr-F: 0.395",
"### System Info:\n\n\n* hf\\_name: epo-bul\n* source\\_languages: epo\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'bg']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: bul\n* short\\_pair: eo-bg\n* chrF2\\_score: 0.395\n* bleu: 19.0\n* brevity\\_penalty: 0.8909999999999999\n* ref\\_len: 3961.0\n* src\\_name: Esperanto\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: epo-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-bul\n\n\n* source group: Esperanto\n* target group: Bulgarian\n* OPUS readme: epo-bul\n* model: transformer-align\n* source language(s): epo\n* target language(s): bul\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.0, chr-F: 0.395",
"### System Info:\n\n\n* hf\\_name: epo-bul\n* source\\_languages: epo\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'bg']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: bul\n* short\\_pair: eo-bg\n* chrF2\\_score: 0.395\n* bleu: 19.0\n* brevity\\_penalty: 0.8909999999999999\n* ref\\_len: 3961.0\n* src\\_name: Esperanto\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: epo-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
53,
138,
432
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-bul\n\n\n* source group: Esperanto\n* target group: Bulgarian\n* OPUS readme: epo-bul\n* model: transformer-align\n* source language(s): epo\n* target language(s): bul\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.0, chr-F: 0.395### System Info:\n\n\n* hf\\_name: epo-bul\n* source\\_languages: epo\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'bg']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: bul\n* short\\_pair: eo-bg\n* chrF2\\_score: 0.395\n* bleu: 19.0\n* brevity\\_penalty: 0.8909999999999999\n* ref\\_len: 3961.0\n* src\\_name: Esperanto\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: epo-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-ces
* source group: Esperanto
* target group: Czech
* OPUS readme: [epo-ces](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ces/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ces
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ces | 17.5 | 0.376 |
### System Info:
- hf_name: epo-ces
- source_languages: epo
- target_languages: ces
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ces/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'cs']
- src_constituents: {'epo'}
- tgt_constituents: {'ces'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ces/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ces
- short_pair: eo-cs
- chrF2_score: 0.376
- bleu: 17.5
- brevity_penalty: 0.922
- ref_len: 22148.0
- src_name: Esperanto
- tgt_name: Czech
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: cs
- prefer_old: False
- long_pair: epo-ces
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "cs"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-cs | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"cs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"cs"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #cs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-ces
* source group: Esperanto
* target group: Czech
* OPUS readme: epo-ces
* model: transformer-align
* source language(s): epo
* target language(s): ces
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 17.5, chr-F: 0.376
### System Info:
* hf\_name: epo-ces
* source\_languages: epo
* target\_languages: ces
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'cs']
* src\_constituents: {'epo'}
* tgt\_constituents: {'ces'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: ces
* short\_pair: eo-cs
* chrF2\_score: 0.376
* bleu: 17.5
* brevity\_penalty: 0.922
* ref\_len: 22148.0
* src\_name: Esperanto
* tgt\_name: Czech
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: cs
* prefer\_old: False
* long\_pair: epo-ces
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-ces\n\n\n* source group: Esperanto\n* target group: Czech\n* OPUS readme: epo-ces\n* model: transformer-align\n* source language(s): epo\n* target language(s): ces\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.5, chr-F: 0.376",
"### System Info:\n\n\n* hf\\_name: epo-ces\n* source\\_languages: epo\n* target\\_languages: ces\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'cs']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ces'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ces\n* short\\_pair: eo-cs\n* chrF2\\_score: 0.376\n* bleu: 17.5\n* brevity\\_penalty: 0.922\n* ref\\_len: 22148.0\n* src\\_name: Esperanto\n* tgt\\_name: Czech\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: cs\n* prefer\\_old: False\n* long\\_pair: epo-ces\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #cs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-ces\n\n\n* source group: Esperanto\n* target group: Czech\n* OPUS readme: epo-ces\n* model: transformer-align\n* source language(s): epo\n* target language(s): ces\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.5, chr-F: 0.376",
"### System Info:\n\n\n* hf\\_name: epo-ces\n* source\\_languages: epo\n* target\\_languages: ces\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'cs']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ces'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ces\n* short\\_pair: eo-cs\n* chrF2\\_score: 0.376\n* bleu: 17.5\n* brevity\\_penalty: 0.922\n* ref\\_len: 22148.0\n* src\\_name: Esperanto\n* tgt\\_name: Czech\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: cs\n* prefer\\_old: False\n* long\\_pair: epo-ces\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
139,
406
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #cs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-ces\n\n\n* source group: Esperanto\n* target group: Czech\n* OPUS readme: epo-ces\n* model: transformer-align\n* source language(s): epo\n* target language(s): ces\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.5, chr-F: 0.376### System Info:\n\n\n* hf\\_name: epo-ces\n* source\\_languages: epo\n* target\\_languages: ces\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'cs']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ces'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ces\n* short\\_pair: eo-cs\n* chrF2\\_score: 0.376\n* bleu: 17.5\n* brevity\\_penalty: 0.922\n* ref\\_len: 22148.0\n* src\\_name: Esperanto\n* tgt\\_name: Czech\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: cs\n* prefer\\_old: False\n* long\\_pair: epo-ces\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-dan
* source group: Esperanto
* target group: Danish
* OPUS readme: [epo-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-dan/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): dan
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.dan | 21.6 | 0.407 |
### System Info:
- hf_name: epo-dan
- source_languages: epo
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'da']
- src_constituents: {'epo'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-dan/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: dan
- short_pair: eo-da
- chrF2_score: 0.40700000000000003
- bleu: 21.6
- brevity_penalty: 0.9359999999999999
- ref_len: 72349.0
- src_name: Esperanto
- tgt_name: Danish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: da
- prefer_old: False
- long_pair: epo-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "da"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-da | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"da"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-dan
* source group: Esperanto
* target group: Danish
* OPUS readme: epo-dan
* model: transformer-align
* source language(s): epo
* target language(s): dan
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.6, chr-F: 0.407
### System Info:
* hf\_name: epo-dan
* source\_languages: epo
* target\_languages: dan
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'da']
* src\_constituents: {'epo'}
* tgt\_constituents: {'dan'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: dan
* short\_pair: eo-da
* chrF2\_score: 0.40700000000000003
* bleu: 21.6
* brevity\_penalty: 0.9359999999999999
* ref\_len: 72349.0
* src\_name: Esperanto
* tgt\_name: Danish
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: da
* prefer\_old: False
* long\_pair: epo-dan
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-dan\n\n\n* source group: Esperanto\n* target group: Danish\n* OPUS readme: epo-dan\n* model: transformer-align\n* source language(s): epo\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.6, chr-F: 0.407",
"### System Info:\n\n\n* hf\\_name: epo-dan\n* source\\_languages: epo\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'da']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: dan\n* short\\_pair: eo-da\n* chrF2\\_score: 0.40700000000000003\n* bleu: 21.6\n* brevity\\_penalty: 0.9359999999999999\n* ref\\_len: 72349.0\n* src\\_name: Esperanto\n* tgt\\_name: Danish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: epo-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-dan\n\n\n* source group: Esperanto\n* target group: Danish\n* OPUS readme: epo-dan\n* model: transformer-align\n* source language(s): epo\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.6, chr-F: 0.407",
"### System Info:\n\n\n* hf\\_name: epo-dan\n* source\\_languages: epo\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'da']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: dan\n* short\\_pair: eo-da\n* chrF2\\_score: 0.40700000000000003\n* bleu: 21.6\n* brevity\\_penalty: 0.9359999999999999\n* ref\\_len: 72349.0\n* src\\_name: Esperanto\n* tgt\\_name: Danish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: epo-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
135,
421
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-dan\n\n\n* source group: Esperanto\n* target group: Danish\n* OPUS readme: epo-dan\n* model: transformer-align\n* source language(s): epo\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.6, chr-F: 0.407### System Info:\n\n\n* hf\\_name: epo-dan\n* source\\_languages: epo\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'da']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: dan\n* short\\_pair: eo-da\n* chrF2\\_score: 0.40700000000000003\n* bleu: 21.6\n* brevity\\_penalty: 0.9359999999999999\n* ref\\_len: 72349.0\n* src\\_name: Esperanto\n* tgt\\_name: Danish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: epo-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-eo-de
* source languages: eo
* target languages: de
* OPUS readme: [eo-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.de | 45.5 | 0.644 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-de | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-eo-de
* source languages: eo
* target languages: de
* OPUS readme: eo-de
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 45.5, chr-F: 0.644
| [
"### opus-mt-eo-de\n\n\n* source languages: eo\n* target languages: de\n* OPUS readme: eo-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.5, chr-F: 0.644"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-eo-de\n\n\n* source languages: eo\n* target languages: de\n* OPUS readme: eo-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.5, chr-F: 0.644"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-eo-de\n\n\n* source languages: eo\n* target languages: de\n* OPUS readme: eo-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.5, chr-F: 0.644"
] |
translation | transformers |
### epo-ell
* source group: Esperanto
* target group: Modern Greek (1453-)
* OPUS readme: [epo-ell](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ell/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ell
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ell | 23.2 | 0.438 |
### System Info:
- hf_name: epo-ell
- source_languages: epo
- target_languages: ell
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ell/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'el']
- src_constituents: {'epo'}
- tgt_constituents: {'ell'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ell/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ell
- short_pair: eo-el
- chrF2_score: 0.43799999999999994
- bleu: 23.2
- brevity_penalty: 0.9159999999999999
- ref_len: 3892.0
- src_name: Esperanto
- tgt_name: Modern Greek (1453-)
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: el
- prefer_old: False
- long_pair: epo-ell
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "el"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-el | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"el",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"el"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-ell
* source group: Esperanto
* target group: Modern Greek (1453-)
* OPUS readme: epo-ell
* model: transformer-align
* source language(s): epo
* target language(s): ell
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.2, chr-F: 0.438
### System Info:
* hf\_name: epo-ell
* source\_languages: epo
* target\_languages: ell
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'el']
* src\_constituents: {'epo'}
* tgt\_constituents: {'ell'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: ell
* short\_pair: eo-el
* chrF2\_score: 0.43799999999999994
* bleu: 23.2
* brevity\_penalty: 0.9159999999999999
* ref\_len: 3892.0
* src\_name: Esperanto
* tgt\_name: Modern Greek (1453-)
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: el
* prefer\_old: False
* long\_pair: epo-ell
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-ell\n\n\n* source group: Esperanto\n* target group: Modern Greek (1453-)\n* OPUS readme: epo-ell\n* model: transformer-align\n* source language(s): epo\n* target language(s): ell\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.438",
"### System Info:\n\n\n* hf\\_name: epo-ell\n* source\\_languages: epo\n* target\\_languages: ell\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'el']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ell'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ell\n* short\\_pair: eo-el\n* chrF2\\_score: 0.43799999999999994\n* bleu: 23.2\n* brevity\\_penalty: 0.9159999999999999\n* ref\\_len: 3892.0\n* src\\_name: Esperanto\n* tgt\\_name: Modern Greek (1453-)\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: el\n* prefer\\_old: False\n* long\\_pair: epo-ell\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-ell\n\n\n* source group: Esperanto\n* target group: Modern Greek (1453-)\n* OPUS readme: epo-ell\n* model: transformer-align\n* source language(s): epo\n* target language(s): ell\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.438",
"### System Info:\n\n\n* hf\\_name: epo-ell\n* source\\_languages: epo\n* target\\_languages: ell\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'el']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ell'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ell\n* short\\_pair: eo-el\n* chrF2\\_score: 0.43799999999999994\n* bleu: 23.2\n* brevity\\_penalty: 0.9159999999999999\n* ref\\_len: 3892.0\n* src\\_name: Esperanto\n* tgt\\_name: Modern Greek (1453-)\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: el\n* prefer\\_old: False\n* long\\_pair: epo-ell\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
145,
439
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #el #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-ell\n\n\n* source group: Esperanto\n* target group: Modern Greek (1453-)\n* OPUS readme: epo-ell\n* model: transformer-align\n* source language(s): epo\n* target language(s): ell\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.438### System Info:\n\n\n* hf\\_name: epo-ell\n* source\\_languages: epo\n* target\\_languages: ell\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'el']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ell'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ell\n* short\\_pair: eo-el\n* chrF2\\_score: 0.43799999999999994\n* bleu: 23.2\n* brevity\\_penalty: 0.9159999999999999\n* ref\\_len: 3892.0\n* src\\_name: Esperanto\n* tgt\\_name: Modern Greek (1453-)\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: el\n* prefer\\_old: False\n* long\\_pair: epo-ell\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-eo-en
* source languages: eo
* target languages: en
* OPUS readme: [eo-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.en | 54.8 | 0.694 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-en | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-eo-en
* source languages: eo
* target languages: en
* OPUS readme: eo-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 54.8, chr-F: 0.694
| [
"### opus-mt-eo-en\n\n\n* source languages: eo\n* target languages: en\n* OPUS readme: eo-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.8, chr-F: 0.694"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-eo-en\n\n\n* source languages: eo\n* target languages: en\n* OPUS readme: eo-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.8, chr-F: 0.694"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-eo-en\n\n\n* source languages: eo\n* target languages: en\n* OPUS readme: eo-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.8, chr-F: 0.694"
] |
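The opus-mt-eo-en card above describes a MarianMT checkpoint hosted on the Hugging Face Hub. A minimal usage sketch with the `transformers` `pipeline` API (assumes network access to download the `Helsinki-NLP/opus-mt-eo-en` weights on first use; the sample sentence is illustrative):

```python
from transformers import pipeline

# Load the Esperanto -> English Marian model described in the card above.
# Weights are fetched from the Hugging Face Hub and cached locally.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-eo-en")

# Translate a short Esperanto sentence; the pipeline returns a list of dicts
# with a "translation_text" key.
result = translator("Saluton, mondo!")
print(result[0]["translation_text"])
```

The same pattern applies to any of the pairs listed in this dump by swapping the model id (e.g. `Helsinki-NLP/opus-mt-eo-es`).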
translation | transformers |
### opus-mt-eo-es
* source languages: eo
* target languages: es
* OPUS readme: [eo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.es | 44.2 | 0.631 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-es | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-eo-es
* source languages: eo
* target languages: es
* OPUS readme: eo-es
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 44.2, chr-F: 0.631
| [
"### opus-mt-eo-es\n\n\n* source languages: eo\n* target languages: es\n* OPUS readme: eo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.2, chr-F: 0.631"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-eo-es\n\n\n* source languages: eo\n* target languages: es\n* OPUS readme: eo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.2, chr-F: 0.631"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-eo-es\n\n\n* source languages: eo\n* target languages: es\n* OPUS readme: eo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.2, chr-F: 0.631"
] |
translation | transformers |
### epo-fin
* source group: Esperanto
* target group: Finnish
* OPUS readme: [epo-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-fin/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): fin
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.fin | 15.9 | 0.371 |
### System Info:
- hf_name: epo-fin
- source_languages: epo
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'fi']
- src_constituents: {'epo'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-fin/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: fin
- short_pair: eo-fi
- chrF2_score: 0.371
- bleu: 15.9
- brevity_penalty: 0.894
- ref_len: 15881.0
- src_name: Esperanto
- tgt_name: Finnish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: fi
- prefer_old: False
- long_pair: epo-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "fi"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-fi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"fi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"fi"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-fin
* source group: Esperanto
* target group: Finnish
* OPUS readme: epo-fin
* model: transformer-align
* source language(s): epo
* target language(s): fin
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 15.9, chr-F: 0.371
### System Info:
* hf\_name: epo-fin
* source\_languages: epo
* target\_languages: fin
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'fi']
* src\_constituents: {'epo'}
* tgt\_constituents: {'fin'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: fin
* short\_pair: eo-fi
* chrF2\_score: 0.371
* bleu: 15.9
* brevity\_penalty: 0.894
* ref\_len: 15881.0
* src\_name: Esperanto
* tgt\_name: Finnish
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: fi
* prefer\_old: False
* long\_pair: epo-fin
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-fin\n\n\n* source group: Esperanto\n* target group: Finnish\n* OPUS readme: epo-fin\n* model: transformer-align\n* source language(s): epo\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.9, chr-F: 0.371",
"### System Info:\n\n\n* hf\\_name: epo-fin\n* source\\_languages: epo\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'fi']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: fin\n* short\\_pair: eo-fi\n* chrF2\\_score: 0.371\n* bleu: 15.9\n* brevity\\_penalty: 0.894\n* ref\\_len: 15881.0\n* src\\_name: Esperanto\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: epo-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-fin\n\n\n* source group: Esperanto\n* target group: Finnish\n* OPUS readme: epo-fin\n* model: transformer-align\n* source language(s): epo\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.9, chr-F: 0.371",
"### System Info:\n\n\n* hf\\_name: epo-fin\n* source\\_languages: epo\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'fi']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: fin\n* short\\_pair: eo-fi\n* chrF2\\_score: 0.371\n* bleu: 15.9\n* brevity\\_penalty: 0.894\n* ref\\_len: 15881.0\n* src\\_name: Esperanto\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: epo-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
136,
402
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-fin\n\n\n* source group: Esperanto\n* target group: Finnish\n* OPUS readme: epo-fin\n* model: transformer-align\n* source language(s): epo\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.9, chr-F: 0.371### System Info:\n\n\n* hf\\_name: epo-fin\n* source\\_languages: epo\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'fi']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: fin\n* short\\_pair: eo-fi\n* chrF2\\_score: 0.371\n* bleu: 15.9\n* brevity\\_penalty: 0.894\n* ref\\_len: 15881.0\n* src\\_name: Esperanto\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: epo-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-eo-fr
* source languages: eo
* target languages: fr
* OPUS readme: [eo-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/eo-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/eo-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/eo-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.fr | 50.9 | 0.675 |
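The benchmark rows in these cards follow a fixed three-column markdown layout (`testset | BLEU | chr-F`), so they can be parsed mechanically. A minimal sketch, with the table string copied from the card above:

```python
# Parse a model-card benchmark table: a header row, a separator row,
# then one data row per test set.
table = """\
| testset               | BLEU  | chr-F |
|-----------------------|-------|-------|
| Tatoeba.eo.fr         | 50.9  | 0.675 |
"""

def parse_benchmarks(md: str):
    rows = []
    for line in md.strip().splitlines()[2:]:  # skip header + separator rows
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        testset, bleu, chrf = cells
        rows.append({"testset": testset, "bleu": float(bleu), "chrf": float(chrf)})
    return rows

print(parse_benchmarks(table))
# → [{'testset': 'Tatoeba.eo.fr', 'bleu': 50.9, 'chrf': 0.675}]
```

The same helper works unchanged on the multi-row tables of other cards in this collection, since they all share this layout.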
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-fr | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-eo-fr
* source languages: eo
* target languages: fr
* OPUS readme: eo-fr
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 50.9, chr-F: 0.675
| [
"### opus-mt-eo-fr\n\n\n* source languages: eo\n* target languages: fr\n* OPUS readme: eo-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.9, chr-F: 0.675"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-eo-fr\n\n\n* source languages: eo\n* target languages: fr\n* OPUS readme: eo-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.9, chr-F: 0.675"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-eo-fr\n\n\n* source languages: eo\n* target languages: fr\n* OPUS readme: eo-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.9, chr-F: 0.675"
] |
translation | transformers |
### epo-heb
* source group: Esperanto
* target group: Hebrew
* OPUS readme: [epo-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-heb/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.heb | 11.5 | 0.306 |
### System Info:
- hf_name: epo-heb
- source_languages: epo
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'he']
- src_constituents: {'epo'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-heb/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: heb
- short_pair: eo-he
- chrF2_score: 0.306
- bleu: 11.5
- brevity_penalty: 0.943
- ref_len: 65645.0
- src_name: Esperanto
- tgt_name: Hebrew
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: he
- prefer_old: False
- long_pair: epo-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "he"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-he | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"he"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-heb
* source group: Esperanto
* target group: Hebrew
* OPUS readme: epo-heb
* model: transformer-align
* source language(s): epo
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 11.5, chr-F: 0.306
### System Info:
* hf\_name: epo-heb
* source\_languages: epo
* target\_languages: heb
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'he']
* src\_constituents: {'epo'}
* tgt\_constituents: {'heb'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: heb
* short\_pair: eo-he
* chrF2\_score: 0.306
* bleu: 11.5
* brevity\_penalty: 0.943
* ref\_len: 65645.0
* src\_name: Esperanto
* tgt\_name: Hebrew
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: he
* prefer\_old: False
* long\_pair: epo-heb
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-heb\n\n\n* source group: Esperanto\n* target group: Hebrew\n* OPUS readme: epo-heb\n* model: transformer-align\n* source language(s): epo\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.5, chr-F: 0.306",
"### System Info:\n\n\n* hf\\_name: epo-heb\n* source\\_languages: epo\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'he']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: heb\n* short\\_pair: eo-he\n* chrF2\\_score: 0.306\n* bleu: 11.5\n* brevity\\_penalty: 0.943\n* ref\\_len: 65645.0\n* src\\_name: Esperanto\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: epo-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-heb\n\n\n* source group: Esperanto\n* target group: Hebrew\n* OPUS readme: epo-heb\n* model: transformer-align\n* source language(s): epo\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.5, chr-F: 0.306",
"### System Info:\n\n\n* hf\\_name: epo-heb\n* source\\_languages: epo\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'he']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: heb\n* short\\_pair: eo-he\n* chrF2\\_score: 0.306\n* bleu: 11.5\n* brevity\\_penalty: 0.943\n* ref\\_len: 65645.0\n* src\\_name: Esperanto\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: epo-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
138,
406
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-heb\n\n\n* source group: Esperanto\n* target group: Hebrew\n* OPUS readme: epo-heb\n* model: transformer-align\n* source language(s): epo\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.5, chr-F: 0.306### System Info:\n\n\n* hf\\_name: epo-heb\n* source\\_languages: epo\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'he']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: heb\n* short\\_pair: eo-he\n* chrF2\\_score: 0.306\n* bleu: 11.5\n* brevity\\_penalty: 0.943\n* ref\\_len: 65645.0\n* src\\_name: Esperanto\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: epo-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-hun
* source group: Esperanto
* target group: Hungarian
* OPUS readme: [epo-hun](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hun/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): hun
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.hun | 12.8 | 0.333 |
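The System Info block below reports `brevity_penalty: 0.914` and `ref_len: 65704.0`. Assuming the standard BLEU definition — BP = exp(1 − ref_len / hyp_len) when the hypothesis is shorter than the reference — those two numbers imply the total hypothesis length the scorer saw. A small sketch:

```python
import math

# Values copied from the eo-hu card's System Info block.
bp = 0.914
ref_len = 65704.0

# Solve BP = exp(1 - ref_len / hyp_len) for hyp_len (valid for BP < 1,
# i.e. hypotheses shorter than the reference):
hyp_len = ref_len / (1.0 - math.log(bp))
print(round(hyp_len))  # implied total hypothesis length, in tokens

# Round-trip check: recomputing BP from the implied length recovers 0.914.
assert abs(math.exp(1.0 - ref_len / hyp_len) - bp) < 1e-9
```

A BP noticeably below 1.0, as here, means the system's translations are shorter overall than the references, which depresses BLEU independently of n-gram precision.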
### System Info:
- hf_name: epo-hun
- source_languages: epo
- target_languages: hun
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hun/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'hu']
- src_constituents: {'epo'}
- tgt_constituents: {'hun'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hun/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: hun
- short_pair: eo-hu
- chrF2_score: 0.33299999999999996
- bleu: 12.8
- brevity_penalty: 0.914
- ref_len: 65704.0
- src_name: Esperanto
- tgt_name: Hungarian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: hu
- prefer_old: False
- long_pair: epo-hun
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "hu"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-hu | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"hu",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"hu"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-hun
* source group: Esperanto
* target group: Hungarian
* OPUS readme: epo-hun
* model: transformer-align
* source language(s): epo
* target language(s): hun
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 12.8, chr-F: 0.333
### System Info:
* hf\_name: epo-hun
* source\_languages: epo
* target\_languages: hun
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'hu']
* src\_constituents: {'epo'}
* tgt\_constituents: {'hun'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: hun
* short\_pair: eo-hu
* chrF2\_score: 0.33299999999999996
* bleu: 12.8
* brevity\_penalty: 0.914
* ref\_len: 65704.0
* src\_name: Esperanto
* tgt\_name: Hungarian
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: hu
* prefer\_old: False
* long\_pair: epo-hun
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-hun\n\n\n* source group: Esperanto\n* target group: Hungarian\n* OPUS readme: epo-hun\n* model: transformer-align\n* source language(s): epo\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.8, chr-F: 0.333",
"### System Info:\n\n\n* hf\\_name: epo-hun\n* source\\_languages: epo\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'hu']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: hun\n* short\\_pair: eo-hu\n* chrF2\\_score: 0.33299999999999996\n* bleu: 12.8\n* brevity\\_penalty: 0.914\n* ref\\_len: 65704.0\n* src\\_name: Esperanto\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: epo-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-hun\n\n\n* source group: Esperanto\n* target group: Hungarian\n* OPUS readme: epo-hun\n* model: transformer-align\n* source language(s): epo\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.8, chr-F: 0.333",
"### System Info:\n\n\n* hf\\_name: epo-hun\n* source\\_languages: epo\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'hu']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: hun\n* short\\_pair: eo-hu\n* chrF2\\_score: 0.33299999999999996\n* bleu: 12.8\n* brevity\\_penalty: 0.914\n* ref\\_len: 65704.0\n* src\\_name: Esperanto\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: epo-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
138,
420
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-hun\n\n\n* source group: Esperanto\n* target group: Hungarian\n* OPUS readme: epo-hun\n* model: transformer-align\n* source language(s): epo\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.8, chr-F: 0.333### System Info:\n\n\n* hf\\_name: epo-hun\n* source\\_languages: epo\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'hu']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: hun\n* short\\_pair: eo-hu\n* chrF2\\_score: 0.33299999999999996\n* bleu: 12.8\n* brevity\\_penalty: 0.914\n* ref\\_len: 65704.0\n* src\\_name: Esperanto\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: epo-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-ita
* source group: Esperanto
* target group: Italian
* OPUS readme: [epo-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ita/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ita | 23.8 | 0.465 |
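Across the Esperanto-source cards in this file, BLEU and chrF2 track each other closely. A small sketch collecting the scores quoted in the nearby cards (values copied from the cards; the two-letter keys are just shorthand for the target languages) and ranking the pairs:

```python
# (BLEU, chrF2) per target language, as quoted in the eo-* model cards.
scores = {
    "fr": (50.9, 0.675),
    "he": (11.5, 0.306),
    "hu": (12.8, 0.333),
    "it": (23.8, 0.465),
    "nl": (15.3, 0.337),
}

# Rank targets by each metric; for these five pairs the two orderings
# coincide, the usual pattern when the metrics agree on system quality.
by_bleu = sorted(scores, key=lambda t: scores[t][0], reverse=True)
by_chrf = sorted(scores, key=lambda t: scores[t][1], reverse=True)
print(by_bleu)  # → ['fr', 'it', 'nl', 'hu', 'he']
assert by_bleu == by_chrf
```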
### System Info:
- hf_name: epo-ita
- source_languages: epo
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'it']
- src_constituents: {'epo'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ita/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ita
- short_pair: eo-it
- chrF2_score: 0.465
- bleu: 23.8
- brevity_penalty: 0.9420000000000001
- ref_len: 67118.0
- src_name: Esperanto
- tgt_name: Italian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: it
- prefer_old: False
- long_pair: epo-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "it"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-it | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"it"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-ita
* source group: Esperanto
* target group: Italian
* OPUS readme: epo-ita
* model: transformer-align
* source language(s): epo
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 23.8, chr-F: 0.465
### System Info:
* hf\_name: epo-ita
* source\_languages: epo
* target\_languages: ita
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'it']
* src\_constituents: {'epo'}
* tgt\_constituents: {'ita'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: ita
* short\_pair: eo-it
* chrF2\_score: 0.465
* bleu: 23.8
* brevity\_penalty: 0.9420000000000001
* ref\_len: 67118.0
* src\_name: Esperanto
* tgt\_name: Italian
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: it
* prefer\_old: False
* long\_pair: epo-ita
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-ita\n\n\n* source group: Esperanto\n* target group: Italian\n* OPUS readme: epo-ita\n* model: transformer-align\n* source language(s): epo\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.465",
"### System Info:\n\n\n* hf\\_name: epo-ita\n* source\\_languages: epo\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'it']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ita\n* short\\_pair: eo-it\n* chrF2\\_score: 0.465\n* bleu: 23.8\n* brevity\\_penalty: 0.9420000000000001\n* ref\\_len: 67118.0\n* src\\_name: Esperanto\n* tgt\\_name: Italian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: epo-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-ita\n\n\n* source group: Esperanto\n* target group: Italian\n* OPUS readme: epo-ita\n* model: transformer-align\n* source language(s): epo\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.465",
"### System Info:\n\n\n* hf\\_name: epo-ita\n* source\\_languages: epo\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'it']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ita\n* short\\_pair: eo-it\n* chrF2\\_score: 0.465\n* bleu: 23.8\n* brevity\\_penalty: 0.9420000000000001\n* ref\\_len: 67118.0\n* src\\_name: Esperanto\n* tgt\\_name: Italian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: epo-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
139,
413
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-ita\n\n\n* source group: Esperanto\n* target group: Italian\n* OPUS readme: epo-ita\n* model: transformer-align\n* source language(s): epo\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.8, chr-F: 0.465### System Info:\n\n\n* hf\\_name: epo-ita\n* source\\_languages: epo\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'it']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ita\n* short\\_pair: eo-it\n* chrF2\\_score: 0.465\n* bleu: 23.8\n* brevity\\_penalty: 0.9420000000000001\n* ref\\_len: 67118.0\n* src\\_name: Esperanto\n* tgt\\_name: Italian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: epo-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-nld
* source group: Esperanto
* target group: Dutch
* OPUS readme: [epo-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-nld/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.nld | 15.3 | 0.337 |
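The `### System Info` block under each card is a flat list of `- key: value` lines, which makes it easy to turn into a dictionary for downstream filtering. A minimal sketch using a few lines copied from the eo-nl card below:

```python
# A few "- key: value" lines from the eo-nl card's System Info block.
info_lines = """\
- hf_name: epo-nld
- src_alpha2: eo
- tgt_alpha2: nl
- bleu: 15.3
- chrF2_score: 0.337
"""

def parse_system_info(text: str) -> dict:
    info = {}
    for line in text.strip().splitlines():
        # Drop the leading "- " bullet, then split on the first ": ".
        key, _, value = line.lstrip("- ").partition(": ")
        # Keep numeric values as floats, everything else as strings.
        try:
            info[key] = float(value)
        except ValueError:
            info[key] = value
    return info

info = parse_system_info(info_lines)
print(info["hf_name"], info["bleu"])  # → epo-nld 15.3
```

Note the float parsing keeps oddities like `chrF2_score: 0.33299999999999996` (seen in the eo-hu card) as-is; rounding for display is left to the caller.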
### System Info:
- hf_name: epo-nld
- source_languages: epo
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'nl']
- src_constituents: {'epo'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-nld/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: nld
- short_pair: eo-nl
- chrF2_score: 0.337
- bleu: 15.3
- brevity_penalty: 0.8640000000000001
- ref_len: 78770.0
- src_name: Esperanto
- tgt_name: Dutch
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: nl
- prefer_old: False
- long_pair: epo-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "nl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-nl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"nl"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-nld
* source group: Esperanto
* target group: Dutch
* OPUS readme: epo-nld
* model: transformer-align
* source language(s): epo
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 15.3, chr-F: 0.337
### System Info:
* hf\_name: epo-nld
* source\_languages: epo
* target\_languages: nld
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'nl']
* src\_constituents: {'epo'}
* tgt\_constituents: {'nld'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: nld
* short\_pair: eo-nl
* chrF2\_score: 0.337
* bleu: 15.3
* brevity\_penalty: 0.8640000000000001
* ref\_len: 78770.0
* src\_name: Esperanto
* tgt\_name: Dutch
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: nl
* prefer\_old: False
* long\_pair: epo-nld
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-nld\n\n\n* source group: Esperanto\n* target group: Dutch\n* OPUS readme: epo-nld\n* model: transformer-align\n* source language(s): epo\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.3, chr-F: 0.337",
"### System Info:\n\n\n* hf\\_name: epo-nld\n* source\\_languages: epo\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'nl']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: nld\n* short\\_pair: eo-nl\n* chrF2\\_score: 0.337\n* bleu: 15.3\n* brevity\\_penalty: 0.8640000000000001\n* ref\\_len: 78770.0\n* src\\_name: Esperanto\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: epo-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-nld\n\n\n* source group: Esperanto\n* target group: Dutch\n* OPUS readme: epo-nld\n* model: transformer-align\n* source language(s): epo\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.3, chr-F: 0.337",
"### System Info:\n\n\n* hf\\_name: epo-nld\n* source\\_languages: epo\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'nl']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: nld\n* short\\_pair: eo-nl\n* chrF2\\_score: 0.337\n* bleu: 15.3\n* brevity\\_penalty: 0.8640000000000001\n* ref\\_len: 78770.0\n* src\\_name: Esperanto\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: epo-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
138,
412
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-nld\n\n\n* source group: Esperanto\n* target group: Dutch\n* OPUS readme: epo-nld\n* model: transformer-align\n* source language(s): epo\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.3, chr-F: 0.337### System Info:\n\n\n* hf\\_name: epo-nld\n* source\\_languages: epo\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'nl']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: nld\n* short\\_pair: eo-nl\n* chrF2\\_score: 0.337\n* bleu: 15.3\n* brevity\\_penalty: 0.8640000000000001\n* ref\\_len: 78770.0\n* src\\_name: Esperanto\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: epo-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-pol
* source group: Esperanto
* target group: Polish
* OPUS readme: [epo-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-pol/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): pol
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.pol | 17.2 | 0.392 |
### System Info:
- hf_name: epo-pol
- source_languages: epo
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'pl']
- src_constituents: {'epo'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-pol/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: pol
- short_pair: eo-pl
- chrF2_score: 0.392
- bleu: 17.2
- brevity_penalty: 0.893
- ref_len: 15343.0
- src_name: Esperanto
- tgt_name: Polish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: pl
- prefer_old: False
- long_pair: epo-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "pl"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-pl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"pl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"pl"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-pol
* source group: Esperanto
* target group: Polish
* OPUS readme: epo-pol
* model: transformer-align
* source language(s): epo
* target language(s): pol
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 17.2, chr-F: 0.392
### System Info:
* hf\_name: epo-pol
* source\_languages: epo
* target\_languages: pol
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'pl']
* src\_constituents: {'epo'}
* tgt\_constituents: {'pol'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: pol
* short\_pair: eo-pl
* chrF2\_score: 0.392
* bleu: 17.2
* brevity\_penalty: 0.893
* ref\_len: 15343.0
* src\_name: Esperanto
* tgt\_name: Polish
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: pl
* prefer\_old: False
* long\_pair: epo-pol
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-pol\n\n\n* source group: Esperanto\n* target group: Polish\n* OPUS readme: epo-pol\n* model: transformer-align\n* source language(s): epo\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.2, chr-F: 0.392",
"### System Info:\n\n\n* hf\\_name: epo-pol\n* source\\_languages: epo\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'pl']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: pol\n* short\\_pair: eo-pl\n* chrF2\\_score: 0.392\n* bleu: 17.2\n* brevity\\_penalty: 0.893\n* ref\\_len: 15343.0\n* src\\_name: Esperanto\n* tgt\\_name: Polish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: epo-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-pol\n\n\n* source group: Esperanto\n* target group: Polish\n* OPUS readme: epo-pol\n* model: transformer-align\n* source language(s): epo\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.2, chr-F: 0.392",
"### System Info:\n\n\n* hf\\_name: epo-pol\n* source\\_languages: epo\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'pl']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: pol\n* short\\_pair: eo-pl\n* chrF2\\_score: 0.392\n* bleu: 17.2\n* brevity\\_penalty: 0.893\n* ref\\_len: 15343.0\n* src\\_name: Esperanto\n* tgt\\_name: Polish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: epo-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
136,
401
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-pol\n\n\n* source group: Esperanto\n* target group: Polish\n* OPUS readme: epo-pol\n* model: transformer-align\n* source language(s): epo\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.2, chr-F: 0.392### System Info:\n\n\n* hf\\_name: epo-pol\n* source\\_languages: epo\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'pl']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: pol\n* short\\_pair: eo-pl\n* chrF2\\_score: 0.392\n* bleu: 17.2\n* brevity\\_penalty: 0.893\n* ref\\_len: 15343.0\n* src\\_name: Esperanto\n* tgt\\_name: Polish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: epo-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-por
* source group: Esperanto
* target group: Portuguese
* OPUS readme: [epo-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-por/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.por | 20.2 | 0.438 |
### System Info:
- hf_name: epo-por
- source_languages: epo
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'pt']
- src_constituents: {'epo'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-por/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: por
- short_pair: eo-pt
- chrF2_score: 0.43799999999999994
- bleu: 20.2
- brevity_penalty: 0.895
- ref_len: 89991.0
- src_name: Esperanto
- tgt_name: Portuguese
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: pt
- prefer_old: False
- long_pair: epo-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "pt"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-pt | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"pt"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-por
* source group: Esperanto
* target group: Portuguese
* OPUS readme: epo-por
* model: transformer-align
* source language(s): epo
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.2, chr-F: 0.438
### System Info:
* hf\_name: epo-por
* source\_languages: epo
* target\_languages: por
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'pt']
* src\_constituents: {'epo'}
* tgt\_constituents: {'por'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: por
* short\_pair: eo-pt
* chrF2\_score: 0.43799999999999994
* bleu: 20.2
* brevity\_penalty: 0.895
* ref\_len: 89991.0
* src\_name: Esperanto
* tgt\_name: Portuguese
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: pt
* prefer\_old: False
* long\_pair: epo-por
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-por\n\n\n* source group: Esperanto\n* target group: Portuguese\n* OPUS readme: epo-por\n* model: transformer-align\n* source language(s): epo\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.438",
"### System Info:\n\n\n* hf\\_name: epo-por\n* source\\_languages: epo\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'pt']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: por\n* short\\_pair: eo-pt\n* chrF2\\_score: 0.43799999999999994\n* bleu: 20.2\n* brevity\\_penalty: 0.895\n* ref\\_len: 89991.0\n* src\\_name: Esperanto\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: epo-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-por\n\n\n* source group: Esperanto\n* target group: Portuguese\n* OPUS readme: epo-por\n* model: transformer-align\n* source language(s): epo\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.438",
"### System Info:\n\n\n* hf\\_name: epo-por\n* source\\_languages: epo\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'pt']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: por\n* short\\_pair: eo-pt\n* chrF2\\_score: 0.43799999999999994\n* bleu: 20.2\n* brevity\\_penalty: 0.895\n* ref\\_len: 89991.0\n* src\\_name: Esperanto\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: epo-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
136,
417
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-por\n\n\n* source group: Esperanto\n* target group: Portuguese\n* OPUS readme: epo-por\n* model: transformer-align\n* source language(s): epo\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.438### System Info:\n\n\n* hf\\_name: epo-por\n* source\\_languages: epo\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'pt']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: por\n* short\\_pair: eo-pt\n* chrF2\\_score: 0.43799999999999994\n* bleu: 20.2\n* brevity\\_penalty: 0.895\n* ref\\_len: 89991.0\n* src\\_name: Esperanto\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: epo-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-ron
* source group: Esperanto
* target group: Romanian
* OPUS readme: [epo-ron](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ron/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.ron | 19.4 | 0.420 |
### System Info:
- hf_name: epo-ron
- source_languages: epo
- target_languages: ron
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-ron/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'ro']
- src_constituents: {'epo'}
- tgt_constituents: {'ron'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-ron/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: ron
- short_pair: eo-ro
- chrF2_score: 0.42
- bleu: 19.4
- brevity_penalty: 0.9179999999999999
- ref_len: 25619.0
- src_name: Esperanto
- tgt_name: Romanian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: ro
- prefer_old: False
- long_pair: epo-ron
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "ro"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-ro | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"ro"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-ron
* source group: Esperanto
* target group: Romanian
* OPUS readme: epo-ron
* model: transformer-align
* source language(s): epo
* target language(s): ron
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 19.4, chr-F: 0.420
### System Info:
* hf\_name: epo-ron
* source\_languages: epo
* target\_languages: ron
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'ro']
* src\_constituents: {'epo'}
* tgt\_constituents: {'ron'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: ron
* short\_pair: eo-ro
* chrF2\_score: 0.42
* bleu: 19.4
* brevity\_penalty: 0.9179999999999999
* ref\_len: 25619.0
* src\_name: Esperanto
* tgt\_name: Romanian
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: ro
* prefer\_old: False
* long\_pair: epo-ron
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-ron\n\n\n* source group: Esperanto\n* target group: Romanian\n* OPUS readme: epo-ron\n* model: transformer-align\n* source language(s): epo\n* target language(s): ron\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.4, chr-F: 0.420",
"### System Info:\n\n\n* hf\\_name: epo-ron\n* source\\_languages: epo\n* target\\_languages: ron\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'ro']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ron'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ron\n* short\\_pair: eo-ro\n* chrF2\\_score: 0.42\n* bleu: 19.4\n* brevity\\_penalty: 0.9179999999999999\n* ref\\_len: 25619.0\n* src\\_name: Esperanto\n* tgt\\_name: Romanian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: ro\n* prefer\\_old: False\n* long\\_pair: epo-ron\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-ron\n\n\n* source group: Esperanto\n* target group: Romanian\n* OPUS readme: epo-ron\n* model: transformer-align\n* source language(s): epo\n* target language(s): ron\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.4, chr-F: 0.420",
"### System Info:\n\n\n* hf\\_name: epo-ron\n* source\\_languages: epo\n* target\\_languages: ron\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'ro']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ron'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ron\n* short\\_pair: eo-ro\n* chrF2\\_score: 0.42\n* bleu: 19.4\n* brevity\\_penalty: 0.9179999999999999\n* ref\\_len: 25619.0\n* src\\_name: Esperanto\n* tgt\\_name: Romanian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: ro\n* prefer\\_old: False\n* long\\_pair: epo-ron\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
135,
413
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #ro #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-ron\n\n\n* source group: Esperanto\n* target group: Romanian\n* OPUS readme: epo-ron\n* model: transformer-align\n* source language(s): epo\n* target language(s): ron\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 19.4, chr-F: 0.420### System Info:\n\n\n* hf\\_name: epo-ron\n* source\\_languages: epo\n* target\\_languages: ron\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'ro']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'ron'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: ron\n* short\\_pair: eo-ro\n* chrF2\\_score: 0.42\n* bleu: 19.4\n* brevity\\_penalty: 0.9179999999999999\n* ref\\_len: 25619.0\n* src\\_name: Esperanto\n* tgt\\_name: Romanian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: ro\n* prefer\\_old: False\n* long\\_pair: epo-ron\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-rus
* source group: Esperanto
* target group: Russian
* OPUS readme: [epo-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.rus | 17.7 | 0.379 |
### System Info:
- hf_name: epo-rus
- source_languages: epo
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'ru']
- src_constituents: {'epo'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-rus/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: rus
- short_pair: eo-ru
- chrF2_score: 0.379
- bleu: 17.7
- brevity_penalty: 0.9179999999999999
- ref_len: 71288.0
- src_name: Esperanto
- tgt_name: Russian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: ru
- prefer_old: False
- long_pair: epo-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "ru"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-ru | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"ru"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-rus
* source group: Esperanto
* target group: Russian
* OPUS readme: epo-rus
* model: transformer-align
* source language(s): epo
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 17.7, chr-F: 0.379
### System Info:
* hf\_name: epo-rus
* source\_languages: epo
* target\_languages: rus
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'ru']
* src\_constituents: {'epo'}
* tgt\_constituents: {'rus'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: rus
* short\_pair: eo-ru
* chrF2\_score: 0.379
* bleu: 17.7
* brevity\_penalty: 0.9179999999999999
* ref\_len: 71288.0
* src\_name: Esperanto
* tgt\_name: Russian
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: ru
* prefer\_old: False
* long\_pair: epo-rus
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-rus\n\n\n* source group: Esperanto\n* target group: Russian\n* OPUS readme: epo-rus\n* model: transformer-align\n* source language(s): epo\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.7, chr-F: 0.379",
"### System Info:\n\n\n* hf\\_name: epo-rus\n* source\\_languages: epo\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'ru']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: rus\n* short\\_pair: eo-ru\n* chrF2\\_score: 0.379\n* bleu: 17.7\n* brevity\\_penalty: 0.9179999999999999\n* ref\\_len: 71288.0\n* src\\_name: Esperanto\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: epo-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-rus\n\n\n* source group: Esperanto\n* target group: Russian\n* OPUS readme: epo-rus\n* model: transformer-align\n* source language(s): epo\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.7, chr-F: 0.379",
"### System Info:\n\n\n* hf\\_name: epo-rus\n* source\\_languages: epo\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'ru']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: rus\n* short\\_pair: eo-ru\n* chrF2\\_score: 0.379\n* bleu: 17.7\n* brevity\\_penalty: 0.9179999999999999\n* ref\\_len: 71288.0\n* src\\_name: Esperanto\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: epo-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
136,
415
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-rus\n\n\n* source group: Esperanto\n* target group: Russian\n* OPUS readme: epo-rus\n* model: transformer-align\n* source language(s): epo\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.7, chr-F: 0.379### System Info:\n\n\n* hf\\_name: epo-rus\n* source\\_languages: epo\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'ru']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: rus\n* short\\_pair: eo-ru\n* chrF2\\_score: 0.379\n* bleu: 17.7\n* brevity\\_penalty: 0.9179999999999999\n* ref\\_len: 71288.0\n* src\\_name: Esperanto\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: epo-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-hbs
* source group: Esperanto
* target group: Serbo-Croatian
* OPUS readme: [epo-hbs](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hbs/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): bos_Latn hrv srp_Cyrl srp_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.hbs | 13.6 | 0.351 |
### System Info:
- hf_name: epo-hbs
- source_languages: epo
- target_languages: hbs
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-hbs/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'sh']
- src_constituents: {'epo'}
- tgt_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-hbs/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: hbs
- short_pair: eo-sh
- chrF2_score: 0.35100000000000003
- bleu: 13.6
- brevity_penalty: 0.888
- ref_len: 17999.0
- src_name: Esperanto
- tgt_name: Serbo-Croatian
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: sh
- prefer_old: False
- long_pair: epo-hbs
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "sh"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-sh | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"sh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"sh"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #sh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-hbs
* source group: Esperanto
* target group: Serbo-Croatian
* OPUS readme: epo-hbs
* model: transformer-align
* source language(s): epo
* target language(s): bos\_Latn hrv srp\_Cyrl srp\_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 13.6, chr-F: 0.351
### System Info:
* hf\_name: epo-hbs
* source\_languages: epo
* target\_languages: hbs
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'sh']
* src\_constituents: {'epo'}
* tgt\_constituents: {'hrv', 'srp\_Cyrl', 'bos\_Latn', 'srp\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: hbs
* short\_pair: eo-sh
* chrF2\_score: 0.35100000000000003
* bleu: 13.6
* brevity\_penalty: 0.888
* ref\_len: 17999.0
* src\_name: Esperanto
* tgt\_name: Serbo-Croatian
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: sh
* prefer\_old: False
* long\_pair: epo-hbs
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-hbs\n\n\n* source group: Esperanto\n* target group: Serbo-Croatian\n* OPUS readme: epo-hbs\n* model: transformer-align\n* source language(s): epo\n* target language(s): bos\\_Latn hrv srp\\_Cyrl srp\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 13.6, chr-F: 0.351",
"### System Info:\n\n\n* hf\\_name: epo-hbs\n* source\\_languages: epo\n* target\\_languages: hbs\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'sh']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: hbs\n* short\\_pair: eo-sh\n* chrF2\\_score: 0.35100000000000003\n* bleu: 13.6\n* brevity\\_penalty: 0.888\n* ref\\_len: 17999.0\n* src\\_name: Esperanto\n* tgt\\_name: Serbo-Croatian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: sh\n* prefer\\_old: False\n* long\\_pair: epo-hbs\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #sh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-hbs\n\n\n* source group: Esperanto\n* target group: Serbo-Croatian\n* OPUS readme: epo-hbs\n* model: transformer-align\n* source language(s): epo\n* target language(s): bos\\_Latn hrv srp\\_Cyrl srp\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 13.6, chr-F: 0.351",
"### System Info:\n\n\n* hf\\_name: epo-hbs\n* source\\_languages: epo\n* target\\_languages: hbs\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'sh']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: hbs\n* short\\_pair: eo-sh\n* chrF2\\_score: 0.35100000000000003\n* bleu: 13.6\n* brevity\\_penalty: 0.888\n* ref\\_len: 17999.0\n* src\\_name: Esperanto\n* tgt\\_name: Serbo-Croatian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: sh\n* prefer\\_old: False\n* long\\_pair: epo-hbs\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
188,
445
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #sh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-hbs\n\n\n* source group: Esperanto\n* target group: Serbo-Croatian\n* OPUS readme: epo-hbs\n* model: transformer-align\n* source language(s): epo\n* target language(s): bos\\_Latn hrv srp\\_Cyrl srp\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 13.6, chr-F: 0.351### System Info:\n\n\n* hf\\_name: epo-hbs\n* source\\_languages: epo\n* target\\_languages: hbs\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'sh']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: hbs\n* short\\_pair: eo-sh\n* chrF2\\_score: 0.35100000000000003\n* bleu: 13.6\n* brevity\\_penalty: 0.888\n* ref\\_len: 17999.0\n* src\\_name: Esperanto\n* tgt\\_name: Serbo-Croatian\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: sh\n* prefer\\_old: False\n* long\\_pair: epo-hbs\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### epo-swe
* source group: Esperanto
* target group: Swedish
* OPUS readme: [epo-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-swe/README.md)
* model: transformer-align
* source language(s): epo
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.epo.swe | 29.5 | 0.463 |
### System Info:
- hf_name: epo-swe
- source_languages: epo
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/epo-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'sv']
- src_constituents: {'epo'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/epo-swe/opus-2020-06-16.test.txt
- src_alpha3: epo
- tgt_alpha3: swe
- short_pair: eo-sv
- chrF2_score: 0.46299999999999997
- bleu: 29.5
- brevity_penalty: 0.9640000000000001
- ref_len: 10977.0
- src_name: Esperanto
- tgt_name: Swedish
- train_date: 2020-06-16
- src_alpha2: eo
- tgt_alpha2: sv
- prefer_old: False
- long_pair: epo-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["eo", "sv"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-eo-sv | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"eo",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"eo",
"sv"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #eo #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### epo-swe
* source group: Esperanto
* target group: Swedish
* OPUS readme: epo-swe
* model: transformer-align
* source language(s): epo
* target language(s): swe
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 29.5, chr-F: 0.463
### System Info:
* hf\_name: epo-swe
* source\_languages: epo
* target\_languages: swe
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['eo', 'sv']
* src\_constituents: {'epo'}
* tgt\_constituents: {'swe'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm4k,spm4k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: epo
* tgt\_alpha3: swe
* short\_pair: eo-sv
* chrF2\_score: 0.46299999999999997
* bleu: 29.5
* brevity\_penalty: 0.9640000000000001
* ref\_len: 10977.0
* src\_name: Esperanto
* tgt\_name: Swedish
* train\_date: 2020-06-16
* src\_alpha2: eo
* tgt\_alpha2: sv
* prefer\_old: False
* long\_pair: epo-swe
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### epo-swe\n\n\n* source group: Esperanto\n* target group: Swedish\n* OPUS readme: epo-swe\n* model: transformer-align\n* source language(s): epo\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.5, chr-F: 0.463",
"### System Info:\n\n\n* hf\\_name: epo-swe\n* source\\_languages: epo\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'sv']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: swe\n* short\\_pair: eo-sv\n* chrF2\\_score: 0.46299999999999997\n* bleu: 29.5\n* brevity\\_penalty: 0.9640000000000001\n* ref\\_len: 10977.0\n* src\\_name: Esperanto\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: epo-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### epo-swe\n\n\n* source group: Esperanto\n* target group: Swedish\n* OPUS readme: epo-swe\n* model: transformer-align\n* source language(s): epo\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.5, chr-F: 0.463",
"### System Info:\n\n\n* hf\\_name: epo-swe\n* source\\_languages: epo\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'sv']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: swe\n* short\\_pair: eo-sv\n* chrF2\\_score: 0.46299999999999997\n* bleu: 29.5\n* brevity\\_penalty: 0.9640000000000001\n* ref\\_len: 10977.0\n* src\\_name: Esperanto\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: epo-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
52,
139,
426
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #eo #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### epo-swe\n\n\n* source group: Esperanto\n* target group: Swedish\n* OPUS readme: epo-swe\n* model: transformer-align\n* source language(s): epo\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.5, chr-F: 0.463### System Info:\n\n\n* hf\\_name: epo-swe\n* source\\_languages: epo\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['eo', 'sv']\n* src\\_constituents: {'epo'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: epo\n* tgt\\_alpha3: swe\n* short\\_pair: eo-sv\n* chrF2\\_score: 0.46299999999999997\n* bleu: 29.5\n* brevity\\_penalty: 0.9640000000000001\n* ref\\_len: 10977.0\n* src\\_name: Esperanto\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-16\n* src\\_alpha2: eo\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: epo-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### opus-mt-es-NORWAY
* source languages: es
* target languages: nb_NO,nb,nn_NO,nn,nog,no_nb,no
* OPUS readme: [es-nb_NO+nb+nn_NO+nn+nog+no_nb+no](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.no | 31.6 | 0.523 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-es-NORWAY | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"no",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #es #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-es-NORWAY
* source languages: es
* target languages: nb\_NO,nb,nn\_NO,nn,nog,no\_nb,no
* OPUS readme: es-nb\_NO+nb+nn\_NO+nn+nog+no\_nb+no
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.6, chr-F: 0.523
| [
"### opus-mt-es-NORWAY\n\n\n* source languages: es\n* target languages: nb\\_NO,nb,nn\\_NO,nn,nog,no\\_nb,no\n* OPUS readme: es-nb\\_NO+nb+nn\\_NO+nn+nog+no\\_nb+no\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.523"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-es-NORWAY\n\n\n* source languages: es\n* target languages: nb\\_NO,nb,nn\\_NO,nn,nog,no\\_nb,no\n* OPUS readme: es-nb\\_NO+nb+nn\\_NO+nn+nog+no\\_nb+no\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.523"
] | [
51,
187
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-es-NORWAY\n\n\n* source languages: es\n* target languages: nb\\_NO,nb,nn\\_NO,nn,nog,no\\_nb,no\n* OPUS readme: es-nb\\_NO+nb+nn\\_NO+nn+nog+no\\_nb+no\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.523"
] |
translation | transformers |
### opus-mt-es-aed
* source languages: es
* target languages: aed
* OPUS readme: [es-aed](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-aed/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-aed/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-aed/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-aed/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.aed | 89.2 | 0.915 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-es-aed | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"aed",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #es #aed #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-es-aed
* source languages: es
* target languages: aed
* OPUS readme: es-aed
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 89.2, chr-F: 0.915
| [
"### opus-mt-es-aed\n\n\n* source languages: es\n* target languages: aed\n* OPUS readme: es-aed\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 89.2, chr-F: 0.915"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #aed #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-es-aed\n\n\n* source languages: es\n* target languages: aed\n* OPUS readme: es-aed\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 89.2, chr-F: 0.915"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #aed #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-es-aed\n\n\n* source languages: es\n* target languages: aed\n* OPUS readme: es-aed\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 89.2, chr-F: 0.915"
] |
translation | transformers |
### spa-afr
* source group: Spanish
* target group: Afrikaans
* OPUS readme: [spa-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-afr/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.afr | 55.0 | 0.718 |
### System Info:
- hf_name: spa-afr
- source_languages: spa
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'af']
- src_constituents: {'spa'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-afr/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: afr
- short_pair: es-af
- chrF2_score: 0.718
- bleu: 55.0
- brevity_penalty: 0.9740000000000001
- ref_len: 3044.0
- src_name: Spanish
- tgt_name: Afrikaans
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: af
- prefer_old: False
- long_pair: spa-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["es", "af"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-es-af | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"af",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es",
"af"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #es #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### spa-afr
* source group: Spanish
* target group: Afrikaans
* OPUS readme: spa-afr
* model: transformer-align
* source language(s): spa
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 55.0, chr-F: 0.718
### System Info:
* hf\_name: spa-afr
* source\_languages: spa
* target\_languages: afr
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['es', 'af']
* src\_constituents: {'spa'}
* tgt\_constituents: {'afr'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: spa
* tgt\_alpha3: afr
* short\_pair: es-af
* chrF2\_score: 0.718
* bleu: 55.0
* brevity\_penalty: 0.9740000000000001
* ref\_len: 3044.0
* src\_name: Spanish
* tgt\_name: Afrikaans
* train\_date: 2020-06-17
* src\_alpha2: es
* tgt\_alpha2: af
* prefer\_old: False
* long\_pair: spa-afr
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### spa-afr\n\n\n* source group: Spanish\n* target group: Afrikaans\n* OPUS readme: spa-afr\n* model: transformer-align\n* source language(s): spa\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.0, chr-F: 0.718",
"### System Info:\n\n\n* hf\\_name: spa-afr\n* source\\_languages: spa\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['es', 'af']\n* src\\_constituents: {'spa'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: spa\n* tgt\\_alpha3: afr\n* short\\_pair: es-af\n* chrF2\\_score: 0.718\n* bleu: 55.0\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 3044.0\n* src\\_name: Spanish\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-17\n* src\\_alpha2: es\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: spa-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### spa-afr\n\n\n* source group: Spanish\n* target group: Afrikaans\n* OPUS readme: spa-afr\n* model: transformer-align\n* source language(s): spa\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.0, chr-F: 0.718",
"### System Info:\n\n\n* hf\\_name: spa-afr\n* source\\_languages: spa\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['es', 'af']\n* src\\_constituents: {'spa'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: spa\n* tgt\\_alpha3: afr\n* short\\_pair: es-af\n* chrF2\\_score: 0.718\n* bleu: 55.0\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 3044.0\n* src\\_name: Spanish\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-17\n* src\\_alpha2: es\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: spa-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
134,
402
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### spa-afr\n\n\n* source group: Spanish\n* target group: Afrikaans\n* OPUS readme: spa-afr\n* model: transformer-align\n* source language(s): spa\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.0, chr-F: 0.718### System Info:\n\n\n* hf\\_name: spa-afr\n* source\\_languages: spa\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['es', 'af']\n* src\\_constituents: {'spa'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: spa\n* tgt\\_alpha3: afr\n* short\\_pair: es-af\n* chrF2\\_score: 0.718\n* bleu: 55.0\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 3044.0\n* src\\_name: Spanish\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-17\n* src\\_alpha2: es\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: spa-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
translation | transformers |
### spa-ara
* source group: Spanish
* target group: Arabic
* OPUS readme: [spa-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-ara/README.md)
* model: transformer
* source language(s): spa
* target language(s): apc apc_Latn ara arq
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.ara | 20.0 | 0.517 |
### System Info:
- hf_name: spa-ara
- source_languages: spa
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'ar']
- src_constituents: {'spa'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-ara/opus-2020-07-03.test.txt
- src_alpha3: spa
- tgt_alpha3: ara
- short_pair: es-ar
- chrF2_score: 0.517
- bleu: 20.0
- brevity_penalty: 0.9390000000000001
- ref_len: 7547.0
- src_name: Spanish
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: es
- tgt_alpha2: ar
- prefer_old: False
- long_pair: spa-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["es", "ar"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-es-ar | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es",
"ar"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #es #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### spa-ara
* source group: Spanish
* target group: Arabic
* OPUS readme: spa-ara
* model: transformer
* source language(s): spa
* target language(s): apc apc\_Latn ara arq
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 20.0, chr-F: 0.517
### System Info:
* hf\_name: spa-ara
* source\_languages: spa
* target\_languages: ara
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['es', 'ar']
* src\_constituents: {'spa'}
* tgt\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: spa
* tgt\_alpha3: ara
* short\_pair: es-ar
* chrF2\_score: 0.517
* bleu: 20.0
* brevity\_penalty: 0.9390000000000001
* ref\_len: 7547.0
* src\_name: Spanish
* tgt\_name: Arabic
* train\_date: 2020-07-03
* src\_alpha2: es
* tgt\_alpha2: ar
* prefer\_old: False
* long\_pair: spa-ara
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
| [
"### spa-ara\n\n\n* source group: Spanish\n* target group: Arabic\n* OPUS readme: spa-ara\n* model: transformer\n* source language(s): spa\n* target language(s): apc apc\\_Latn ara arq\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.0, chr-F: 0.517",
"### System Info:\n\n\n* hf\\_name: spa-ara\n* source\\_languages: spa\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['es', 'ar']\n* src\\_constituents: {'spa'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: spa\n* tgt\\_alpha3: ara\n* short\\_pair: es-ar\n* chrF2\\_score: 0.517\n* bleu: 20.0\n* brevity\\_penalty: 0.9390000000000001\n* ref\\_len: 7547.0\n* src\\_name: Spanish\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: es\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: spa-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### spa-ara\n\n\n* source group: Spanish\n* target group: Arabic\n* OPUS readme: spa-ara\n* model: transformer\n* source language(s): spa\n* target language(s): apc apc\\_Latn ara arq\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.0, chr-F: 0.517",
"### System Info:\n\n\n* hf\\_name: spa-ara\n* source\\_languages: spa\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['es', 'ar']\n* src\\_constituents: {'spa'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: spa\n* tgt\\_alpha3: ara\n* short\\_pair: es-ar\n* chrF2\\_score: 0.517\n* bleu: 20.0\n* brevity\\_penalty: 0.9390000000000001\n* ref\\_len: 7547.0\n* src\\_name: Spanish\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: es\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: spa-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] | [
51,
165,
445
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### spa-ara\n\n\n* source group: Spanish\n* target group: Arabic\n* OPUS readme: spa-ara\n* model: transformer\n* source language(s): spa\n* target language(s): apc apc\\_Latn ara arq\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.0, chr-F: 0.517### System Info:\n\n\n* hf\\_name: spa-ara\n* source\\_languages: spa\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['es', 'ar']\n* src\\_constituents: {'spa'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: spa\n* tgt\\_alpha3: ara\n* short\\_pair: es-ar\n* chrF2\\_score: 0.517\n* bleu: 20.0\n* brevity\\_penalty: 0.9390000000000001\n* ref\\_len: 7547.0\n* src\\_name: Spanish\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: es\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: spa-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41"
] |
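The es-ar card above notes that, because the model has several target variants (apc, ara, arq), a sentence-initial language token of the form `>>id<<` is required. A minimal sketch of how such a prefix might be built before tokenization — the helper name and example IDs are illustrative, not part of the cards:

```python
def add_lang_token(text: str, lang_id: str) -> str:
    """Prefix a source sentence with the >>id<< target-language token
    expected by multi-target OPUS-MT models (e.g. es-ar, whose target
    language(s) include apc, ara and arq)."""
    return f">>{lang_id}<< {text}"

# Example: request Standard Arabic (ara) output from the es-ar model.
prompt = add_lang_token("¿Dónde está la biblioteca?", "ara")
print(prompt)  # >>ara<< ¿Dónde está la biblioteca?

# With transformers installed, the prefixed sentence would then be fed
# to the checkpoint as usual (sketch only, network download required):
# from transformers import MarianMTModel, MarianTokenizer
# tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-es-ar")
# model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-es-ar")
# out = model.generate(**tok(prompt, return_tensors="pt"))
```

Single-target models on these cards (e.g. es-af, es-bg) do not need the token; only cards that state the requirement do.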
translation | transformers |
### opus-mt-es-ase
* source languages: es
* target languages: ase
* OPUS readme: [es-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ase/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.ase | 31.5 | 0.488 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-es-ase | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"ase",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #es #ase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-es-ase
* source languages: es
* target languages: ase
* OPUS readme: es-ase
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 31.5, chr-F: 0.488
| [
"### opus-mt-es-ase\n\n\n* source languages: es\n* target languages: ase\n* OPUS readme: es-ase\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.5, chr-F: 0.488"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #ase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-es-ase\n\n\n* source languages: es\n* target languages: ase\n* OPUS readme: es-ase\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.5, chr-F: 0.488"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #ase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-es-ase\n\n\n* source languages: es\n* target languages: ase\n* OPUS readme: es-ase\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.5, chr-F: 0.488"
] |
translation | transformers |
### opus-mt-es-bcl
* source languages: es
* target languages: bcl
* OPUS readme: [es-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bcl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bcl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bcl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.bcl | 37.1 | 0.586 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-es-bcl | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"bcl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #es #bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-es-bcl
* source languages: es
* target languages: bcl
* OPUS readme: es-bcl
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 37.1, chr-F: 0.586
| [
"### opus-mt-es-bcl\n\n\n* source languages: es\n* target languages: bcl\n* OPUS readme: es-bcl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.1, chr-F: 0.586"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-es-bcl\n\n\n* source languages: es\n* target languages: bcl\n* OPUS readme: es-bcl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.1, chr-F: 0.586"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #bcl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-es-bcl\n\n\n* source languages: es\n* target languages: bcl\n* OPUS readme: es-bcl\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.1, chr-F: 0.586"
] |
translation | transformers |
### opus-mt-es-ber
* source languages: es
* target languages: ber
* OPUS readme: [es-ber](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ber/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ber/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.es.ber | 21.8 | 0.444 |
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-es-ber | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"ber",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #es #ber #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### opus-mt-es-ber
* source languages: es
* target languages: ber
* OPUS readme: es-ber
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 21.8, chr-F: 0.444
| [
"### opus-mt-es-ber\n\n\n* source languages: es\n* target languages: ber\n* OPUS readme: es-ber\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.444"
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #ber #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### opus-mt-es-ber\n\n\n* source languages: es\n* target languages: ber\n* OPUS readme: es-ber\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.444"
] | [
52,
109
] | [
"TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #es #ber #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-es-ber\n\n\n* source languages: es\n* target languages: ber\n* OPUS readme: es-ber\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.444"
] |
translation | transformers |
### spa-bul
* source group: Spanish
* target group: Bulgarian
* OPUS readme: [spa-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-bul/README.md)
* model: transformer
* source language(s): spa
* target language(s): bul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.bul | 50.9 | 0.674 |
### System Info:
- hf_name: spa-bul
- source_languages: spa
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'bg']
- src_constituents: {'spa'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-bul/opus-2020-07-03.test.txt
- src_alpha3: spa
- tgt_alpha3: bul
- short_pair: es-bg
- chrF2_score: 0.674
- bleu: 50.9
- brevity_penalty: 0.955
- ref_len: 1707.0
- src_name: Spanish
- tgt_name: Bulgarian
- train_date: 2020-07-03
- src_alpha2: es
- tgt_alpha2: bg
- prefer_old: False
- long_pair: spa-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | {"language": ["es", "bg"], "license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-es-bg | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"bg",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"es",
"bg"
] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #es #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
| ### spa-bul
* source group: Spanish
* target group: Bulgarian
* OPUS readme: spa-bul
* model: transformer
* source language(s): spa
* target language(s): bul
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL
Benchmarks
----------
testset: URL, BLEU: 50.9, chr-F: 0.674
### System Info:
* hf\_name: spa-bul
* source\_languages: spa
* target\_languages: bul
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['es', 'bg']
* src\_constituents: {'spa'}
* tgt\_constituents: {'bul', 'bul\_Latn'}
* src\_multilingual: False
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: spa
* tgt\_alpha3: bul
* short\_pair: es-bg
* chrF2\_score: 0.674
* bleu: 50.9
* brevity\_penalty: 0.955
* ref\_len: 1707.0
* src\_name: Spanish
* tgt\_name: Bulgarian
* train\_date: 2020-07-03
* src\_alpha2: es
* tgt\_alpha2: bg
* prefer\_old: False
* long\_pair: spa-bul
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
translation | transformers |
### opus-mt-es-bi
* source languages: es
* target languages: bi
* OPUS readme: [es-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bi/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.bi | 28.0 | 0.473 |
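The chr-F column in the benchmark table reports chrF, a character n-gram F-score (Popović, 2015). A simplified sketch, using character n-grams only with β = 2 and whitespace stripped; the reported scores come from the OPUS-MT evaluation pipeline, not from this code:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    # Character n-grams with whitespace removed, as a multiset.
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    # Average clipped n-gram precision/recall over n = 1..max_n, then F-beta.
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue  # strings too short for this n
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p == 0.0 and r == 0.0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

print(round(chrf("la casa azul", "la casa azul"), 3))  # identical strings → 1.0
```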
| {"license": "apache-2.0", "tags": ["translation"]} | Helsinki-NLP/opus-mt-es-bi | null | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"bi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tf #marian #text2text-generation #translation #es #bi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us