Dataset schema (column, type, value range):

| column | type | range |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | string | 1–900k chars |
| metadata | string | 2–438k chars |
| id | string | 5–122 chars |
| last_modified | null | — |
| tags | sequence | 1–1.84k items |
| sha | null | — |
| created_at | string | 25 chars |
| arxiv | sequence | 0–201 items |
| languages | sequence | 0–1.83k items |
| tags_str | string | 17–9.34k chars |
| text_str | string | 0–389k chars |
| text_lists | sequence | 0–722 items |
| processed_texts | sequence | 1–723 items |
| tokens_length | sequence | 1–723 items |
| input_texts | sequence | 1 item |
translation
transformers
### opus-mt-lus-sv

* source languages: lus
* target languages: sv
* OPUS readme: [lus-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-sv/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lus.sv | 25.5 | 0.439 |
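The benchmark table above reports chr-F alongside BLEU. As a rough illustration of what chr-F measures, here is a simplified pure-Python sketch of the metric (character n-gram precision and recall combined into an F-beta score with beta = 2); the official scores come from the OPUS evaluation tooling, and this sketch omits its word-n-gram and tokenization details.

```python
from collections import Counter

def char_ngrams(text, n):
    """All character n-grams of the string (whitespace collapsed to single spaces)."""
    s = " ".join(text.split())
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chrF: average char n-gram precision/recall, F-beta combined."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

A perfect hypothesis scores 1.0 and a fully disjoint one 0.0, matching the 0–1 range of the chr-F columns in these cards.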
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lus-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lus", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
translation
transformers
### opus-mt-lv-en

* source languages: lv
* target languages: en
* OPUS readme: [lv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enlv.lv.en | 29.9 | 0.587 |
| newstest2017-enlv.lv.en | 22.1 | 0.526 |
| Tatoeba.lv.en | 53.3 | 0.707 |
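The BLEU column in these benchmark tables is the standard clipped n-gram precision score. As a minimal sketch, here is a sentence-level BLEU in pure Python, assuming whitespace tokenization and no smoothing; the scores reported in the cards are corpus-level and computed by the OPUS evaluation pipeline, so this is illustrative only.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU sketch: clipped n-gram precision up to max_n,
    geometric mean, brevity penalty. No smoothing."""
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        total = sum(h.values())
        if total == 0:
            return 0.0
        clipped = sum((h & r).values())  # each n-gram counted at most ref count
        if clipped == 0:
            return 0.0  # a zero precision zeroes the whole (unsmoothed) score
        log_prec += math.log(clipped / total) / max_n
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_prec)
```

An identical hypothesis and reference score 1.0; the 20–55 BLEU figures in the tables correspond to this value scaled by 100.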
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lv-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lv", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
translation
transformers
### opus-mt-lv-es

* source languages: lv
* target languages: es
* OPUS readme: [lv-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.es | 21.7 | 0.433 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lv-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lv", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
translation
transformers
### opus-mt-lv-fi

* source languages: lv
* target languages: fi
* OPUS readme: [lv-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-fi/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fi/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fi/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.fi | 20.6 | 0.469 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lv-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lv", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
translation
transformers
### opus-mt-lv-fr

* source languages: lv
* target languages: fr
* OPUS readme: [lv-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.fr | 22.1 | 0.437 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lv-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lv", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
translation
transformers
### lav-rus

* source group: Latvian
* target group: Russian
* OPUS readme: [lav-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-rus/README.md)
* model: transformer-align
* source language(s): lav
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.lav.rus | 53.3 | 0.702 |

### System Info:

- hf_name: lav-rus
- source_languages: lav
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lav-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lv', 'ru']
- src_constituents: {'lav'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lav-rus/opus-2020-06-17.test.txt
- src_alpha3: lav
- tgt_alpha3: rus
- short_pair: lv-ru
- chrF2_score: 0.702
- bleu: 53.3
- brevity_penalty: 0.984
- ref_len: 1541.0
- src_name: Latvian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: lv
- tgt_alpha2: ru
- prefer_old: False
- long_pair: lav-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
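The System Info block reports a brevity_penalty of 0.984 against a ref_len of 1541. The brevity penalty in BLEU is 1.0 when the system output is at least as long as the reference and exp(1 - r/c) when it is shorter; the sketch below implements that definition and, purely as an illustration, back-computes the hypothesis length implied by the card's numbers.

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: 1.0 when the hypothesis is at least as long
    as the reference, exp(1 - r/c) when it is shorter."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / hyp_len)

# Back-solve exp(1 - 1541/c) = 0.984 for c: the system output was only
# slightly shorter than the 1541-token reference.
implied_hyp_len = round(1541 / (1 - math.log(0.984)))
```

Since the penalty is so close to 1.0, it barely dampens the reported BLEU of 53.3 here; it matters more for systems that systematically under-generate.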
{"language": ["lv", "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lv-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lv", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lv", "ru" ]
translation
transformers
### opus-mt-lv-sv

* source languages: lv
* target languages: sv
* OPUS readme: [lv-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-sv/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-sv/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-sv/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lv.sv | 22.0 | 0.444 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lv-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lv", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
translation
transformers
### opus-mt-mfe-en

* source languages: mfe
* target languages: en
* OPUS readme: [mfe-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mfe-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mfe-en/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-en/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-en/opus-2020-01-09.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mfe.en | 39.9 | 0.552 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mfe-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mfe", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
translation
transformers
### opus-mt-mfe-es

* source languages: mfe
* target languages: es
* OPUS readme: [mfe-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mfe-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mfe-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfe-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mfe.es | 24.0 | 0.418 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mfe-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mfe", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
translation
transformers
### opus-mt-mfs-es * source languages: mfs * target languages: es * OPUS readme: [mfs-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mfs-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mfs-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfs-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mfs-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mfs.es | 88.9 | 0.910 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mfs-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mfs", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mfs #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mfs-es * source languages: mfs * target languages: es * OPUS readme: mfs-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 88.9, chr-F: 0.910
[ "### opus-mt-mfs-es\n\n\n* source languages: mfs\n* target languages: es\n* OPUS readme: mfs-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 88.9, chr-F: 0.910" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mfs #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mfs-es\n\n\n* source languages: mfs\n* target languages: es\n* OPUS readme: mfs-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 88.9, chr-F: 0.910" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mfs #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mfs-es\n\n\n* source languages: mfs\n* target languages: es\n* OPUS readme: mfs-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 88.9, chr-F: 0.910" ]
translation
transformers
### opus-mt-mg-en * source languages: mg * target languages: en * OPUS readme: [mg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mg-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | GlobalVoices.mg.en | 27.6 | 0.522 | | Tatoeba.mg.en | 50.2 | 0.607 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mg-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mg", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mg-en * source languages: mg * target languages: en * OPUS readme: mg-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.6, chr-F: 0.522 testset: URL, BLEU: 50.2, chr-F: 0.607
[ "### opus-mt-mg-en\n\n\n* source languages: mg\n* target languages: en\n* OPUS readme: mg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.522\ntestset: URL, BLEU: 50.2, chr-F: 0.607" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mg-en\n\n\n* source languages: mg\n* target languages: en\n* OPUS readme: mg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.522\ntestset: URL, BLEU: 50.2, chr-F: 0.607" ]
[ 51, 129 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mg-en\n\n\n* source languages: mg\n* target languages: en\n* OPUS readme: mg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.522\ntestset: URL, BLEU: 50.2, chr-F: 0.607" ]
translation
transformers
### opus-mt-mg-es * source languages: mg * target languages: es * OPUS readme: [mg-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mg-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mg-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mg-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | GlobalVoices.mg.es | 23.1 | 0.480 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mg-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mg", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mg-es * source languages: mg * target languages: es * OPUS readme: mg-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.1, chr-F: 0.480
[ "### opus-mt-mg-es\n\n\n* source languages: mg\n* target languages: es\n* OPUS readme: mg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.480" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mg-es\n\n\n* source languages: mg\n* target languages: es\n* OPUS readme: mg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.480" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mg-es\n\n\n* source languages: mg\n* target languages: es\n* OPUS readme: mg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.480" ]
translation
transformers
### opus-mt-mh-en * source languages: mh * target languages: en * OPUS readme: [mh-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mh-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mh.en | 36.5 | 0.505 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mh-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mh", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mh #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mh-en * source languages: mh * target languages: en * OPUS readme: mh-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 36.5, chr-F: 0.505
[ "### opus-mt-mh-en\n\n\n* source languages: mh\n* target languages: en\n* OPUS readme: mh-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.5, chr-F: 0.505" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mh #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mh-en\n\n\n* source languages: mh\n* target languages: en\n* OPUS readme: mh-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.5, chr-F: 0.505" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mh #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mh-en\n\n\n* source languages: mh\n* target languages: en\n* OPUS readme: mh-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.5, chr-F: 0.505" ]
translation
transformers
### opus-mt-mh-es * source languages: mh * target languages: es * OPUS readme: [mh-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mh-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mh.es | 23.6 | 0.407 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mh-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mh", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mh #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mh-es * source languages: mh * target languages: es * OPUS readme: mh-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.6, chr-F: 0.407
[ "### opus-mt-mh-es\n\n\n* source languages: mh\n* target languages: es\n* OPUS readme: mh-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.6, chr-F: 0.407" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mh #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mh-es\n\n\n* source languages: mh\n* target languages: es\n* OPUS readme: mh-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.6, chr-F: 0.407" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mh #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mh-es\n\n\n* source languages: mh\n* target languages: es\n* OPUS readme: mh-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.6, chr-F: 0.407" ]
translation
transformers
### opus-mt-mh-fi * source languages: mh * target languages: fi * OPUS readme: [mh-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mh-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/mh-fi/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-fi/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mh-fi/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mh.fi | 23.3 | 0.442 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mh-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mh", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mh #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mh-fi * source languages: mh * target languages: fi * OPUS readme: mh-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.3, chr-F: 0.442
[ "### opus-mt-mh-fi\n\n\n* source languages: mh\n* target languages: fi\n* OPUS readme: mh-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.442" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mh #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mh-fi\n\n\n* source languages: mh\n* target languages: fi\n* OPUS readme: mh-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.442" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mh #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mh-fi\n\n\n* source languages: mh\n* target languages: fi\n* OPUS readme: mh-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.442" ]
translation
transformers
### opus-mt-mk-en * source languages: mk * target languages: en * OPUS readme: [mk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.mk.en | 59.8 | 0.720 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mk-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mk", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mk-en * source languages: mk * target languages: en * OPUS readme: mk-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 59.8, chr-F: 0.720
[ "### opus-mt-mk-en\n\n\n* source languages: mk\n* target languages: en\n* OPUS readme: mk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 59.8, chr-F: 0.720" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mk-en\n\n\n* source languages: mk\n* target languages: en\n* OPUS readme: mk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 59.8, chr-F: 0.720" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mk-en\n\n\n* source languages: mk\n* target languages: en\n* OPUS readme: mk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 59.8, chr-F: 0.720" ]
translation
transformers
### mkd-spa * source group: Macedonian * target group: Spanish * OPUS readme: [mkd-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkd-spa/README.md) * model: transformer-align * source language(s): mkd * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.mkd.spa | 56.5 | 0.717 | ### System Info: - hf_name: mkd-spa - source_languages: mkd - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkd-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['mk', 'es'] - src_constituents: {'mkd'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mkd-spa/opus-2020-06-17.test.txt - src_alpha3: mkd - tgt_alpha3: spa - short_pair: mk-es - chrF2_score: 0.7170000000000001 - bleu: 56.5 - brevity_penalty: 0.997 - ref_len: 1121.0 - src_name: Macedonian - tgt_name: Spanish - train_date: 2020-06-17 - src_alpha2: mk - tgt_alpha2: es - prefer_old: False - long_pair: mkd-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["mk", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mk-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mk", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "mk", "es" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mk #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### mkd-spa * source group: Macedonian * target group: Spanish * OPUS readme: mkd-spa * model: transformer-align * source language(s): mkd * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 56.5, chr-F: 0.717 ### System Info: * hf\_name: mkd-spa * source\_languages: mkd * target\_languages: spa * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['mk', 'es'] * src\_constituents: {'mkd'} * tgt\_constituents: {'spa'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: mkd * tgt\_alpha3: spa * short\_pair: mk-es * chrF2\_score: 0.7170000000000001 * bleu: 56.5 * brevity\_penalty: 0.997 * ref\_len: 1121.0 * src\_name: Macedonian * tgt\_name: Spanish * train\_date: 2020-06-17 * src\_alpha2: mk * tgt\_alpha2: es * prefer\_old: False * long\_pair: mkd-spa * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### mkd-spa\n\n\n* source group: Macedonian\n* target group: Spanish\n* OPUS readme: mkd-spa\n* model: transformer-align\n* source language(s): mkd\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.5, chr-F: 0.717", "### System Info:\n\n\n* hf\\_name: mkd-spa\n* source\\_languages: mkd\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['mk', 'es']\n* src\\_constituents: {'mkd'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: mkd\n* tgt\\_alpha3: spa\n* short\\_pair: mk-es\n* chrF2\\_score: 0.7170000000000001\n* bleu: 56.5\n* brevity\\_penalty: 0.997\n* ref\\_len: 1121.0\n* src\\_name: Macedonian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: mk\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: mkd-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mk #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### mkd-spa\n\n\n* source group: Macedonian\n* target group: Spanish\n* OPUS readme: mkd-spa\n* model: transformer-align\n* source language(s): mkd\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.5, chr-F: 0.717", "### System Info:\n\n\n* hf\\_name: mkd-spa\n* source\\_languages: mkd\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['mk', 'es']\n* src\\_constituents: {'mkd'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: mkd\n* tgt\\_alpha3: spa\n* short\\_pair: mk-es\n* chrF2\\_score: 0.7170000000000001\n* bleu: 56.5\n* brevity\\_penalty: 0.997\n* ref\\_len: 1121.0\n* src\\_name: Macedonian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: mk\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: mkd-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 402 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mk #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### mkd-spa\n\n\n* source group: Macedonian\n* target group: Spanish\n* OPUS readme: mkd-spa\n* model: transformer-align\n* source language(s): mkd\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.5, chr-F: 0.717### System Info:\n\n\n* hf\\_name: mkd-spa\n* source\\_languages: mkd\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['mk', 'es']\n* src\\_constituents: {'mkd'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: mkd\n* tgt\\_alpha3: spa\n* short\\_pair: mk-es\n* chrF2\\_score: 0.7170000000000001\n* bleu: 56.5\n* brevity\\_penalty: 0.997\n* ref\\_len: 1121.0\n* src\\_name: Macedonian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: mk\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: mkd-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-mk-fi * source languages: mk * target languages: fi * OPUS readme: [mk-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-fi/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fi/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fi/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mk.fi | 25.9 | 0.498 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mk-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mk", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mk #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mk-fi * source languages: mk * target languages: fi * OPUS readme: mk-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.9, chr-F: 0.498
[ "### opus-mt-mk-fi\n\n\n* source languages: mk\n* target languages: fi\n* OPUS readme: mk-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.498" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mk #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mk-fi\n\n\n* source languages: mk\n* target languages: fi\n* OPUS readme: mk-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.498" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mk #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mk-fi\n\n\n* source languages: mk\n* target languages: fi\n* OPUS readme: mk-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.498" ]
translation
transformers
### opus-mt-mk-fr * source languages: mk * target languages: fr * OPUS readme: [mk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | GlobalVoices.mk.fr | 22.3 | 0.492 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mk-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mk", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mk #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mk-fr * source languages: mk * target languages: fr * OPUS readme: mk-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.3, chr-F: 0.492
[ "### opus-mt-mk-fr\n\n\n* source languages: mk\n* target languages: fr\n* OPUS readme: mk-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.3, chr-F: 0.492" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mk #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mk-fr\n\n\n* source languages: mk\n* target languages: fr\n* OPUS readme: mk-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.3, chr-F: 0.492" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mk #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mk-fr\n\n\n* source languages: mk\n* target languages: fr\n* OPUS readme: mk-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.3, chr-F: 0.492" ]
translation
transformers
### mkh-eng * source group: Mon-Khmer languages * target group: English * OPUS readme: [mkh-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkh-eng/README.md) * model: transformer * source language(s): kha khm khm_Latn mnw vie vie_Hani * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.zip) * test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.test.txt) * test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kha-eng.kha.eng | 0.5 | 0.108 | | Tatoeba-test.khm-eng.khm.eng | 8.5 | 0.206 | | Tatoeba-test.mnw-eng.mnw.eng | 0.7 | 0.110 | | Tatoeba-test.multi.eng | 24.5 | 0.407 | | Tatoeba-test.vie-eng.vie.eng | 34.4 | 0.529 | ### System Info: - hf_name: mkh-eng - source_languages: mkh - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mkh-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'km', 'mkh', 'en'] - src_constituents: {'vie_Hani', 'mnw', 'vie', 'kha', 'khm_Latn', 'khm'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mkh-eng/opus-2020-07-27.test.txt - src_alpha3: mkh - tgt_alpha3: eng - short_pair: mkh-en - chrF2_score: 0.40700000000000003 - bleu: 24.5 - brevity_penalty: 1.0 - ref_len: 33985.0 - src_name: Mon-Khmer languages - tgt_name: English - train_date: 2020-07-27 - src_alpha2: mkh - 
tgt_alpha2: en - prefer_old: False - long_pair: mkh-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["vi", "km", "mkh", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mkh-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "km", "mkh", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "vi", "km", "mkh", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #vi #km #mkh #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### mkh-eng * source group: Mon-Khmer languages * target group: English * OPUS readme: mkh-eng * model: transformer * source language(s): kha khm khm\_Latn mnw vie vie\_Hani * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 0.5, chr-F: 0.108 testset: URL, BLEU: 8.5, chr-F: 0.206 testset: URL, BLEU: 0.7, chr-F: 0.110 testset: URL, BLEU: 24.5, chr-F: 0.407 testset: URL, BLEU: 34.4, chr-F: 0.529 ### System Info: * hf\_name: mkh-eng * source\_languages: mkh * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['vi', 'km', 'mkh', 'en'] * src\_constituents: {'vie\_Hani', 'mnw', 'vie', 'kha', 'khm\_Latn', 'khm'} * tgt\_constituents: {'eng'} * src\_multilingual: True * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: mkh * tgt\_alpha3: eng * short\_pair: mkh-en * chrF2\_score: 0.40700000000000003 * bleu: 24.5 * brevity\_penalty: 1.0 * ref\_len: 33985.0 * src\_name: Mon-Khmer languages * tgt\_name: English * train\_date: 2020-07-27 * src\_alpha2: mkh * tgt\_alpha2: en * prefer\_old: False * long\_pair: mkh-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### mkh-eng\n\n\n* source group: Mon-Khmer languages\n* target group: English\n* OPUS readme: mkh-eng\n* model: transformer\n* source language(s): kha khm khm\\_Latn mnw vie vie\\_Hani\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 0.5, chr-F: 0.108\ntestset: URL, BLEU: 8.5, chr-F: 0.206\ntestset: URL, BLEU: 0.7, chr-F: 0.110\ntestset: URL, BLEU: 24.5, chr-F: 0.407\ntestset: URL, BLEU: 34.4, chr-F: 0.529", "### System Info:\n\n\n* hf\\_name: mkh-eng\n* source\\_languages: mkh\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'km', 'mkh', 'en']\n* src\\_constituents: {'vie\\_Hani', 'mnw', 'vie', 'kha', 'khm\\_Latn', 'khm'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: mkh\n* tgt\\_alpha3: eng\n* short\\_pair: mkh-en\n* chrF2\\_score: 0.40700000000000003\n* bleu: 24.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 33985.0\n* src\\_name: Mon-Khmer languages\n* tgt\\_name: English\n* train\\_date: 2020-07-27\n* src\\_alpha2: mkh\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: mkh-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #km #mkh #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### mkh-eng\n\n\n* source group: Mon-Khmer languages\n* target group: English\n* OPUS readme: mkh-eng\n* model: transformer\n* source language(s): kha khm khm\\_Latn mnw vie vie\\_Hani\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 0.5, chr-F: 0.108\ntestset: URL, BLEU: 8.5, chr-F: 0.206\ntestset: URL, BLEU: 0.7, chr-F: 0.110\ntestset: URL, BLEU: 24.5, chr-F: 0.407\ntestset: URL, BLEU: 34.4, chr-F: 0.529", "### System Info:\n\n\n* hf\\_name: mkh-eng\n* source\\_languages: mkh\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'km', 'mkh', 'en']\n* src\\_constituents: {'vie\\_Hani', 'mnw', 'vie', 'kha', 'khm\\_Latn', 'khm'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: mkh\n* tgt\\_alpha3: eng\n* short\\_pair: mkh-en\n* chrF2\\_score: 0.40700000000000003\n* bleu: 24.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 33985.0\n* src\\_name: Mon-Khmer languages\n* tgt\\_name: English\n* train\\_date: 2020-07-27\n* src\\_alpha2: mkh\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: mkh-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 56, 238, 448 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #km #mkh #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### mkh-eng\n\n\n* source group: Mon-Khmer languages\n* target group: English\n* OPUS readme: mkh-eng\n* model: transformer\n* source language(s): kha khm khm\\_Latn mnw vie vie\\_Hani\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 0.5, chr-F: 0.108\ntestset: URL, BLEU: 8.5, chr-F: 0.206\ntestset: URL, BLEU: 0.7, chr-F: 0.110\ntestset: URL, BLEU: 24.5, chr-F: 0.407\ntestset: URL, BLEU: 34.4, chr-F: 0.529### System Info:\n\n\n* hf\\_name: mkh-eng\n* source\\_languages: mkh\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'km', 'mkh', 'en']\n* src\\_constituents: {'vie\\_Hani', 'mnw', 'vie', 'kha', 'khm\\_Latn', 'khm'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: mkh\n* tgt\\_alpha3: eng\n* short\\_pair: mkh-en\n* chrF2\\_score: 0.40700000000000003\n* bleu: 24.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 33985.0\n* src\\_name: Mon-Khmer languages\n* tgt\\_name: English\n* train\\_date: 2020-07-27\n* src\\_alpha2: mkh\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: mkh-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-ml-en * source languages: ml * target languages: en * OPUS readme: [ml-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ml-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-04-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ml-en/opus-2020-04-20.zip) * test set translations: [opus-2020-04-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ml-en/opus-2020-04-20.test.txt) * test set scores: [opus-2020-04-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ml-en/opus-2020-04-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ml.en | 42.7 | 0.605 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ml-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ml", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ml #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ml-en * source languages: ml * target languages: en * OPUS readme: ml-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 42.7, chr-F: 0.605
[ "### opus-mt-ml-en\n\n\n* source languages: ml\n* target languages: en\n* OPUS readme: ml-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.7, chr-F: 0.605" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ml #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ml-en\n\n\n* source languages: ml\n* target languages: en\n* OPUS readme: ml-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.7, chr-F: 0.605" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ml #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ml-en\n\n\n* source languages: ml\n* target languages: en\n* OPUS readme: ml-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.7, chr-F: 0.605" ]
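The records above all follow one Hub naming convention: `Helsinki-NLP/opus-mt-{src}-{tgt}`, where `src` and `tgt` are the language codes listed in each card (e.g. `ml` and `en` for the record just shown). A minimal sketch of building that repo id — the helper name `opus_mt_model_id` is my own, and the commented-out MarianMT loading code is typical `transformers` usage, not taken from these cards:

```python
def opus_mt_model_id(src: str, tgt: str) -> str:
    """Build the Hub repo id used by these cards, e.g. 'Helsinki-NLP/opus-mt-ml-en'."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"

# Typical usage with transformers' MarianMT classes (requires downloading the
# model weights, so it is left commented out here):
# from transformers import MarianMTModel, MarianTokenizer
# repo = opus_mt_model_id("ml", "en")
# tok = MarianTokenizer.from_pretrained(repo)
# model = MarianMTModel.from_pretrained(repo)
# batch = tok(["..."], return_tensors="pt")
# print(tok.batch_decode(model.generate(**batch), skip_special_tokens=True))

print(opus_mt_model_id("ml", "en"))  # → Helsinki-NLP/opus-mt-ml-en
```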
translation
transformers
### opus-mt-mos-en * source languages: mos * target languages: en * OPUS readme: [mos-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mos-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/mos-en/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mos-en/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mos-en/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mos.en | 26.1 | 0.408 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mos-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mos", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mos #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mos-en * source languages: mos * target languages: en * OPUS readme: mos-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.1, chr-F: 0.408
[ "### opus-mt-mos-en\n\n\n* source languages: mos\n* target languages: en\n* OPUS readme: mos-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.408" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mos #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mos-en\n\n\n* source languages: mos\n* target languages: en\n* OPUS readme: mos-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.408" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mos #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mos-en\n\n\n* source languages: mos\n* target languages: en\n* OPUS readme: mos-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.408" ]
translation
transformers
### opus-mt-mr-en * source languages: mr * target languages: en * OPUS readme: [mr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mr-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mr-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.mr.en | 38.2 | 0.515 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mr-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mr", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mr #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mr-en * source languages: mr * target languages: en * OPUS readme: mr-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 38.2, chr-F: 0.515
[ "### opus-mt-mr-en\n\n\n* source languages: mr\n* target languages: en\n* OPUS readme: mr-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.515" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mr #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mr-en\n\n\n* source languages: mr\n* target languages: en\n* OPUS readme: mr-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.515" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mr #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mr-en\n\n\n* source languages: mr\n* target languages: en\n* OPUS readme: mr-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.515" ]
translation
transformers
### msa-deu * source group: Malay (macrolanguage) * target group: German * OPUS readme: [msa-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-deu/README.md) * model: transformer-align * source language(s): ind zsm_Latn * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.msa.deu | 36.5 | 0.584 | ### System Info: - hf_name: msa-deu - source_languages: msa - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ms', 'de'] - src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-deu/opus-2020-06-17.test.txt - src_alpha3: msa - tgt_alpha3: deu - short_pair: ms-de - chrF2_score: 0.584 - bleu: 36.5 - brevity_penalty: 0.966 - ref_len: 4198.0 - src_name: Malay (macrolanguage) - tgt_name: German - train_date: 2020-06-17 - src_alpha2: ms - tgt_alpha2: de - prefer_old: False - long_pair: msa-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 
2020-08-21-14:41
{"language": ["ms", "de"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ms-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ms", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ms", "de" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ms #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### msa-deu * source group: Malay (macrolanguage) * target group: German * OPUS readme: msa-deu * model: transformer-align * source language(s): ind zsm\_Latn * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 36.5, chr-F: 0.584 ### System Info: * hf\_name: msa-deu * source\_languages: msa * target\_languages: deu * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ms', 'de'] * src\_constituents: {'zsm\_Latn', 'ind', 'max\_Latn', 'zlm\_Latn', 'min'} * tgt\_constituents: {'deu'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: msa * tgt\_alpha3: deu * short\_pair: ms-de * chrF2\_score: 0.584 * bleu: 36.5 * brevity\_penalty: 0.966 * ref\_len: 4198.0 * src\_name: Malay (macrolanguage) * tgt\_name: German * train\_date: 2020-06-17 * src\_alpha2: ms * tgt\_alpha2: de * prefer\_old: False * long\_pair: msa-deu * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### msa-deu\n\n\n* source group: Malay (macrolanguage)\n* target group: German\n* OPUS readme: msa-deu\n* model: transformer-align\n* source language(s): ind zsm\\_Latn\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.5, chr-F: 0.584", "### System Info:\n\n\n* hf\\_name: msa-deu\n* source\\_languages: msa\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms', 'de']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: deu\n* short\\_pair: ms-de\n* chrF2\\_score: 0.584\n* bleu: 36.5\n* brevity\\_penalty: 0.966\n* ref\\_len: 4198.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: msa-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ms #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### msa-deu\n\n\n* source group: Malay (macrolanguage)\n* target group: German\n* OPUS readme: msa-deu\n* model: transformer-align\n* source language(s): ind zsm\\_Latn\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.5, chr-F: 0.584", "### System Info:\n\n\n* hf\\_name: msa-deu\n* source\\_languages: msa\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms', 'de']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: deu\n* short\\_pair: ms-de\n* chrF2\\_score: 0.584\n* bleu: 36.5\n* brevity\\_penalty: 0.966\n* ref\\_len: 4198.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: msa-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 149, 440 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ms #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### msa-deu\n\n\n* source group: Malay (macrolanguage)\n* target group: German\n* OPUS readme: msa-deu\n* model: transformer-align\n* source language(s): ind zsm\\_Latn\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.5, chr-F: 0.584### System Info:\n\n\n* hf\\_name: msa-deu\n* source\\_languages: msa\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms', 'de']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: deu\n* short\\_pair: ms-de\n* chrF2\\_score: 0.584\n* bleu: 36.5\n* brevity\\_penalty: 0.966\n* ref\\_len: 4198.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: msa-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
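In the processed text fields, each benchmark table row is flattened into a single line of the form `testset: URL, BLEU: 36.5, chr-F: 0.584`. A small sketch for recovering the scores from that flattened form — the regex and function name are illustrative, not part of the dataset itself:

```python
import re

# Matches the flattened benchmark lines used in the processed cards,
# e.g. "testset: URL, BLEU: 36.5, chr-F: 0.584".
BENCH_RE = re.compile(r"BLEU:\s*([\d.]+),\s*chr-F:\s*([\d.]+)")

def parse_benchmark(line: str):
    """Return (bleu, chrf) as floats, or None if the line has no scores."""
    m = BENCH_RE.search(line)
    return (float(m.group(1)), float(m.group(2))) if m else None

print(parse_benchmark("testset: URL, BLEU: 36.5, chr-F: 0.584"))  # → (36.5, 0.584)
```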
translation
transformers
### msa-fra * source group: Malay (macrolanguage) * target group: French * OPUS readme: [msa-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-fra/README.md) * model: transformer-align * source language(s): ind zsm_Latn * target language(s): fra * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.msa.fra | 43.7 | 0.609 | ### System Info: - hf_name: msa-fra - source_languages: msa - target_languages: fra - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-fra/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ms', 'fr'] - src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'} - tgt_constituents: {'fra'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-fra/opus-2020-06-17.test.txt - src_alpha3: msa - tgt_alpha3: fra - short_pair: ms-fr - chrF2_score: 0.609 - bleu: 43.7 - brevity_penalty: 0.9740000000000001 - ref_len: 7808.0 - src_name: Malay (macrolanguage) - tgt_name: French - train_date: 2020-06-17 - src_alpha2: ms - tgt_alpha2: fr - prefer_old: False - long_pair: msa-fra - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
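The `brevity_penalty` and `ref_len` fields in the System Info above follow the standard BLEU definition; a minimal sketch of how the penalty relates candidate length to reference length (the function name and the example candidate length of 7600 are illustrative, while `ref_len` 7808 is taken from the card):

```python
import math

def brevity_penalty(candidate_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: 1.0 when the system output is at
    least as long as the reference, exp(1 - r/c) when it is shorter."""
    if candidate_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / candidate_len)

# The card reports brevity_penalty ~0.974 against ref_len 7808, which
# corresponds to system output slightly shorter than the reference.
print(brevity_penalty(7600, 7808))
```

A penalty below 1.0, as here, means BLEU is discounting the score for translations that run shorter than the references.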
{"language": ["ms", "fr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ms-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ms", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ms", "fr" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ms #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### msa-fra * source group: Malay (macrolanguage) * target group: French * OPUS readme: msa-fra * model: transformer-align * source language(s): ind zsm\_Latn * target language(s): fra * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 43.7, chr-F: 0.609 ### System Info: * hf\_name: msa-fra * source\_languages: msa * target\_languages: fra * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ms', 'fr'] * src\_constituents: {'zsm\_Latn', 'ind', 'max\_Latn', 'zlm\_Latn', 'min'} * tgt\_constituents: {'fra'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: msa * tgt\_alpha3: fra * short\_pair: ms-fr * chrF2\_score: 0.609 * bleu: 43.7 * brevity\_penalty: 0.9740000000000001 * ref\_len: 7808.0 * src\_name: Malay (macrolanguage) * tgt\_name: French * train\_date: 2020-06-17 * src\_alpha2: ms * tgt\_alpha2: fr * prefer\_old: False * long\_pair: msa-fra * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### msa-fra\n\n\n* source group: Malay (macrolanguage)\n* target group: French\n* OPUS readme: msa-fra\n* model: transformer-align\n* source language(s): ind zsm\\_Latn\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.7, chr-F: 0.609", "### System Info:\n\n\n* hf\\_name: msa-fra\n* source\\_languages: msa\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms', 'fr']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: fra\n* short\\_pair: ms-fr\n* chrF2\\_score: 0.609\n* bleu: 43.7\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 7808.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: msa-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ms #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### msa-fra\n\n\n* source group: Malay (macrolanguage)\n* target group: French\n* OPUS readme: msa-fra\n* model: transformer-align\n* source language(s): ind zsm\\_Latn\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.7, chr-F: 0.609", "### System Info:\n\n\n* hf\\_name: msa-fra\n* source\\_languages: msa\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms', 'fr']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: fra\n* short\\_pair: ms-fr\n* chrF2\\_score: 0.609\n* bleu: 43.7\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 7808.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: msa-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 146, 440 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ms #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### msa-fra\n\n\n* source group: Malay (macrolanguage)\n* target group: French\n* OPUS readme: msa-fra\n* model: transformer-align\n* source language(s): ind zsm\\_Latn\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.7, chr-F: 0.609### System Info:\n\n\n* hf\\_name: msa-fra\n* source\\_languages: msa\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms', 'fr']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: fra\n* short\\_pair: ms-fr\n* chrF2\\_score: 0.609\n* bleu: 43.7\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 7808.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: msa-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### msa-ita * source group: Malay (macrolanguage) * target group: Italian * OPUS readme: [msa-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-ita/README.md) * model: transformer-align * source language(s): ind zsm_Latn * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.msa.ita | 37.8 | 0.613 | ### System Info: - hf_name: msa-ita - source_languages: msa - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ms', 'it'] - src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-ita/opus-2020-06-17.test.txt - src_alpha3: msa - tgt_alpha3: ita - short_pair: ms-it - chrF2_score: 0.613 - bleu: 37.8 - brevity_penalty: 0.995 - ref_len: 2758.0 - src_name: Malay (macrolanguage) - tgt_name: Italian - train_date: 2020-06-17 - src_alpha2: ms - tgt_alpha2: it - prefer_old: False - long_pair: msa-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ms", "it"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ms-it
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ms", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ms", "it" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ms #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### msa-ita * source group: Malay (macrolanguage) * target group: Italian * OPUS readme: msa-ita * model: transformer-align * source language(s): ind zsm\_Latn * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 37.8, chr-F: 0.613 ### System Info: * hf\_name: msa-ita * source\_languages: msa * target\_languages: ita * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ms', 'it'] * src\_constituents: {'zsm\_Latn', 'ind', 'max\_Latn', 'zlm\_Latn', 'min'} * tgt\_constituents: {'ita'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: msa * tgt\_alpha3: ita * short\_pair: ms-it * chrF2\_score: 0.613 * bleu: 37.8 * brevity\_penalty: 0.995 * ref\_len: 2758.0 * src\_name: Malay (macrolanguage) * tgt\_name: Italian * train\_date: 2020-06-17 * src\_alpha2: ms * tgt\_alpha2: it * prefer\_old: False * long\_pair: msa-ita * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### msa-ita\n\n\n* source group: Malay (macrolanguage)\n* target group: Italian\n* OPUS readme: msa-ita\n* model: transformer-align\n* source language(s): ind zsm\\_Latn\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.8, chr-F: 0.613", "### System Info:\n\n\n* hf\\_name: msa-ita\n* source\\_languages: msa\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms', 'it']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: ita\n* short\\_pair: ms-it\n* chrF2\\_score: 0.613\n* bleu: 37.8\n* brevity\\_penalty: 0.995\n* ref\\_len: 2758.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: msa-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ms #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### msa-ita\n\n\n* source group: Malay (macrolanguage)\n* target group: Italian\n* OPUS readme: msa-ita\n* model: transformer-align\n* source language(s): ind zsm\\_Latn\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.8, chr-F: 0.613", "### System Info:\n\n\n* hf\\_name: msa-ita\n* source\\_languages: msa\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms', 'it']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: ita\n* short\\_pair: ms-it\n* chrF2\\_score: 0.613\n* bleu: 37.8\n* brevity\\_penalty: 0.995\n* ref\\_len: 2758.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: msa-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 149, 439 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ms #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### msa-ita\n\n\n* source group: Malay (macrolanguage)\n* target group: Italian\n* OPUS readme: msa-ita\n* model: transformer-align\n* source language(s): ind zsm\\_Latn\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.8, chr-F: 0.613### System Info:\n\n\n* hf\\_name: msa-ita\n* source\\_languages: msa\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms', 'it']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: ita\n* short\\_pair: ms-it\n* chrF2\\_score: 0.613\n* bleu: 37.8\n* brevity\\_penalty: 0.995\n* ref\\_len: 2758.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: msa-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### msa-msa * source group: Malay (macrolanguage) * target group: Malay (macrolanguage) * OPUS readme: [msa-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-msa/README.md) * model: transformer-align * source language(s): ind max_Latn min zlm_Latn zsm_Latn * target language(s): ind max_Latn min zlm_Latn zsm_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.msa.msa | 18.6 | 0.418 | ### System Info: - hf_name: msa-msa - source_languages: msa - target_languages: msa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/msa-msa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ms'] - src_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'} - tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/msa-msa/opus-2020-06-17.test.txt - src_alpha3: msa - tgt_alpha3: msa - short_pair: ms-ms - chrF2_score: 0.418 - bleu: 18.6 - brevity_penalty: 1.0 - ref_len: 6029.0 - src_name: Malay (macrolanguage) - tgt_name: Malay (macrolanguage) - train_date: 2020-06-17 - src_alpha2: ms - tgt_alpha2: ms - prefer_old: False - long_pair: msa-msa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
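The msa-msa card above notes that a sentence-initial token of the form `>>id<<` selects the target language variant. A minimal sketch of that input convention as plain string preparation (the helper name is illustrative; the valid IDs are copied from the card's target language list):

```python
# Target-variant IDs listed in the msa-msa card above.
VALID_TARGETS = {"ind", "max_Latn", "min", "zlm_Latn", "zsm_Latn"}

def add_target_token(sentence: str, target_id: str) -> str:
    """Prepend the >>id<< target-language token expected by multilingual
    OPUS-MT models such as msa-msa."""
    if target_id not in VALID_TARGETS:
        raise ValueError(f"unknown target id: {target_id}")
    return f">>{target_id}<< {sentence}"

# Route a sentence toward the zsm_Latn (Standard Malay) target variant.
print(add_target_token("Saya suka kucing.", "zsm_Latn"))
```

The tokenized sentence is then passed to the model as usual; the leading token is what steers decoding toward the requested variant.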
{"language": ["ms"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ms-ms
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ms", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ms" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### msa-msa * source group: Malay (macrolanguage) * target group: Malay (macrolanguage) * OPUS readme: msa-msa * model: transformer-align * source language(s): ind max\_Latn min zlm\_Latn zsm\_Latn * target language(s): ind max\_Latn min zlm\_Latn zsm\_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 18.6, chr-F: 0.418 ### System Info: * hf\_name: msa-msa * source\_languages: msa * target\_languages: msa * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ms'] * src\_constituents: {'zsm\_Latn', 'ind', 'max\_Latn', 'zlm\_Latn', 'min'} * tgt\_constituents: {'zsm\_Latn', 'ind', 'max\_Latn', 'zlm\_Latn', 'min'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: msa * tgt\_alpha3: msa * short\_pair: ms-ms * chrF2\_score: 0.418 * bleu: 18.6 * brevity\_penalty: 1.0 * ref\_len: 6029.0 * src\_name: Malay (macrolanguage) * tgt\_name: Malay (macrolanguage) * train\_date: 2020-06-17 * src\_alpha2: ms * tgt\_alpha2: ms * prefer\_old: False * long\_pair: msa-msa * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### msa-msa\n\n\n* source group: Malay (macrolanguage)\n* target group: Malay (macrolanguage)\n* OPUS readme: msa-msa\n* model: transformer-align\n* source language(s): ind max\\_Latn min zlm\\_Latn zsm\\_Latn\n* target language(s): ind max\\_Latn min zlm\\_Latn zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.6, chr-F: 0.418", "### System Info:\n\n\n* hf\\_name: msa-msa\n* source\\_languages: msa\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: msa\n* short\\_pair: ms-ms\n* chrF2\\_score: 0.418\n* bleu: 18.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 6029.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: msa-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### msa-msa\n\n\n* source group: Malay (macrolanguage)\n* target group: Malay (macrolanguage)\n* OPUS readme: msa-msa\n* model: transformer-align\n* source language(s): ind max\\_Latn min zlm\\_Latn zsm\\_Latn\n* target language(s): ind max\\_Latn min zlm\\_Latn zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.6, chr-F: 0.418", "### System Info:\n\n\n* hf\\_name: msa-msa\n* source\\_languages: msa\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: msa\n* short\\_pair: ms-ms\n* chrF2\\_score: 0.418\n* bleu: 18.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 6029.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: msa-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 49, 216, 472 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### msa-msa\n\n\n* source group: Malay (macrolanguage)\n* target group: Malay (macrolanguage)\n* OPUS readme: msa-msa\n* model: transformer-align\n* source language(s): ind max\\_Latn min zlm\\_Latn zsm\\_Latn\n* target language(s): ind max\\_Latn min zlm\\_Latn zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.6, chr-F: 0.418### System Info:\n\n\n* hf\\_name: msa-msa\n* source\\_languages: msa\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ms']\n* src\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: msa\n* tgt\\_alpha3: msa\n* short\\_pair: ms-ms\n* chrF2\\_score: 0.418\n* bleu: 18.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 6029.0\n* src\\_name: Malay (macrolanguage)\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: ms\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: msa-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-mt-en * source languages: mt * target languages: en * OPUS readme: [mt-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mt.en | 49.0 | 0.655 | | Tatoeba.mt.en | 53.3 | 0.685 |
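The benchmark tables in these cards share one markdown layout. A minimal sketch of pulling the scores into a lookup (the function name is illustrative; the row strings are copied from the mt-en table above):

```python
def parse_benchmark_row(row: str):
    """Parse one markdown table row of the form
    '| testset | BLEU | chr-F |' into (testset, bleu, chrf)."""
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    testset, bleu, chrf = cells
    return testset, float(bleu), float(chrf)

# Benchmark rows from the opus-mt-mt-en card.
rows = [
    "| JW300.mt.en | 49.0 | 0.655 |",
    "| Tatoeba.mt.en | 53.3 | 0.685 |",
]
scores = {name: (bleu, chrf) for name, bleu, chrf in map(parse_benchmark_row, rows)}
print(scores["Tatoeba.mt.en"])  # → (53.3, 0.685)
```

The same parser applies to any card in this collection, since they all emit the identical three-column benchmark table.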
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mt-en
null
[ "transformers", "pytorch", "tf", "jax", "marian", "text2text-generation", "translation", "mt", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #jax #marian #text2text-generation #translation #mt #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mt-en * source languages: mt * target languages: en * OPUS readme: mt-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 49.0, chr-F: 0.655 testset: URL, BLEU: 53.3, chr-F: 0.685
[ "### opus-mt-mt-en\n\n\n* source languages: mt\n* target languages: en\n* OPUS readme: mt-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.0, chr-F: 0.655\ntestset: URL, BLEU: 53.3, chr-F: 0.685" ]
[ "TAGS\n#transformers #pytorch #tf #jax #marian #text2text-generation #translation #mt #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mt-en\n\n\n* source languages: mt\n* target languages: en\n* OPUS readme: mt-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.0, chr-F: 0.655\ntestset: URL, BLEU: 53.3, chr-F: 0.685" ]
[ 53, 129 ]
[ "TAGS\n#transformers #pytorch #tf #jax #marian #text2text-generation #translation #mt #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mt-en\n\n\n* source languages: mt\n* target languages: en\n* OPUS readme: mt-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.0, chr-F: 0.655\ntestset: URL, BLEU: 53.3, chr-F: 0.685" ]
translation
transformers
### opus-mt-mt-es * source languages: mt * target languages: es * OPUS readme: [mt-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mt.es | 27.1 | 0.471 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mt-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mt", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mt #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mt-es * source languages: mt * target languages: es * OPUS readme: mt-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.1, chr-F: 0.471
[ "### opus-mt-mt-es\n\n\n* source languages: mt\n* target languages: es\n* OPUS readme: mt-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.1, chr-F: 0.471" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mt-es\n\n\n* source languages: mt\n* target languages: es\n* OPUS readme: mt-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.1, chr-F: 0.471" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mt-es\n\n\n* source languages: mt\n* target languages: es\n* OPUS readme: mt-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.1, chr-F: 0.471" ]
translation
transformers
### opus-mt-mt-fi * source languages: mt * target languages: fi * OPUS readme: [mt-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mt.fi | 24.9 | 0.509 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mt-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mt", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mt #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mt-fi * source languages: mt * target languages: fi * OPUS readme: mt-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.9, chr-F: 0.509
[ "### opus-mt-mt-fi\n\n\n* source languages: mt\n* target languages: fi\n* OPUS readme: mt-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.509" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mt-fi\n\n\n* source languages: mt\n* target languages: fi\n* OPUS readme: mt-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.509" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mt-fi\n\n\n* source languages: mt\n* target languages: fi\n* OPUS readme: mt-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.509" ]
translation
transformers
### opus-mt-mt-fr * source languages: mt * target languages: fr * OPUS readme: [mt-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mt.fr | 27.2 | 0.475 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mt-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mt", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mt #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mt-fr * source languages: mt * target languages: fr * OPUS readme: mt-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.2, chr-F: 0.475
[ "### opus-mt-mt-fr\n\n\n* source languages: mt\n* target languages: fr\n* OPUS readme: mt-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.475" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mt-fr\n\n\n* source languages: mt\n* target languages: fr\n* OPUS readme: mt-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.475" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mt-fr\n\n\n* source languages: mt\n* target languages: fr\n* OPUS readme: mt-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.2, chr-F: 0.475" ]
translation
transformers
### opus-mt-mt-sv * source languages: mt * target languages: sv * OPUS readme: [mt-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.mt.sv | 30.4 | 0.514 |
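The benchmark tables on these cards report chr-F alongside BLEU. A simplified, self-contained sketch of the chrF metric (character n-gram F-score, Popović 2015) — this is an illustrative reimplementation, not the exact scorer used to produce the numbers above:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    # Character n-grams over the whitespace-normalized string.
    s = " ".join(text.split())
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Sentence-level chrF: average character n-gram precision and recall,
    combined into an F-beta score (beta=2 weights recall twice as much)."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # string too short for this order
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

Identical strings score 1.0 and fully disjoint strings score 0.0; corpus-level chrF (as reported here) aggregates the n-gram counts over all segments before computing the F-score.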
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mt-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mt", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #mt #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-mt-sv * source languages: mt * target languages: sv * OPUS readme: mt-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.4, chr-F: 0.514
[ "### opus-mt-mt-sv\n\n\n* source languages: mt\n* target languages: sv\n* OPUS readme: mt-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.514" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-mt-sv\n\n\n* source languages: mt\n* target languages: sv\n* OPUS readme: mt-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.514" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #mt #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-mt-sv\n\n\n* source languages: mt\n* target languages: sv\n* OPUS readme: mt-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.514" ]
translation
transformers
### mul-eng * source group: Multiple languages * target group: English * OPUS readme: [mul-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md) * model: transformer * source language(s): abk acm ady afb afh_Latn afr akl_Latn aln amh ang_Latn apc ara arg arq ary arz asm ast avk_Latn awa aze_Latn bak bam_Latn bel bel_Latn ben bho bod bos_Latn bre brx brx_Latn bul bul_Latn cat ceb ces cha che chr chv cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant cor cos crh crh_Latn csb_Latn cym dan deu dsb dtp dws_Latn egl ell enm_Latn epo est eus ewe ext fao fij fin fkv_Latn fra frm_Latn frr fry fuc fuv gan gcf_Latn gil gla gle glg glv gom gos got_Goth grc_Grek grn gsw guj hat hau_Latn haw heb hif_Latn hil hin hnj_Latn hoc hoc_Latn hrv hsb hun hye iba ibo ido ido_Latn ike_Latn ile_Latn ilo ina_Latn ind isl ita izh jav jav_Java jbo jbo_Cyrl jbo_Latn jdt_Cyrl jpn kab kal kan kat kaz_Cyrl kaz_Latn kek_Latn kha khm khm_Latn kin kir_Cyrl kjh kpv krl ksh kum kur_Arab kur_Latn lad lad_Latn lao lat_Latn lav ldn_Latn lfn_Cyrl lfn_Latn lij lin lit liv_Latn lkt lld_Latn lmo ltg ltz lug lzh lzh_Hans mad mah mai mal mar max_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob_Hebr nog non_Latn nov_Latn npi nya oci ori orv_Cyrl oss ota_Arab ota_Latn pag pan_Guru pap pau pdc pes pes_Latn pes_Thaa pms pnb pol por ppl_Latn prg_Latn pus quc qya qya_Latn rap rif_Latn roh rom ron rue run rus sag sah san_Deva scn sco sgs shs_Latn shy_Latn sin sjn_Latn slv sma sme smo sna snd_Arab som spa sqi srp_Cyrl srp_Latn stq sun swe swg swh tah tam tat tat_Arab tat_Latn tel tet tgk_Cyrl tha tir tlh_Latn tly_Latn tmw_Latn toi_Latn ton tpw_Latn tso tuk tuk_Latn tur tvl tyv tzl tzl_Latn udm uig_Arab uig_Cyrl ukr umb urd uzb_Cyrl uzb_Latn vec vie vie_Hani vol_Latn vro war wln wol wuu xal xho yid yor yue yue_Hans yue_Hant zho zho_Hans zho_Hant zlm_Latn zsm_Latn zul zza * target language(s): eng * model: transformer * pre-processing: 
normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-hineng.hin.eng | 8.5 | 0.341 | | newsdev2015-enfi-fineng.fin.eng | 16.8 | 0.441 | | newsdev2016-enro-roneng.ron.eng | 31.3 | 0.580 | | newsdev2016-entr-tureng.tur.eng | 16.4 | 0.422 | | newsdev2017-enlv-laveng.lav.eng | 21.3 | 0.502 | | newsdev2017-enzh-zhoeng.zho.eng | 12.7 | 0.409 | | newsdev2018-enet-esteng.est.eng | 19.8 | 0.467 | | newsdev2019-engu-gujeng.guj.eng | 13.3 | 0.385 | | newsdev2019-enlt-liteng.lit.eng | 19.9 | 0.482 | | newsdiscussdev2015-enfr-fraeng.fra.eng | 26.7 | 0.520 | | newsdiscusstest2015-enfr-fraeng.fra.eng | 29.8 | 0.541 | | newssyscomb2009-ceseng.ces.eng | 21.1 | 0.487 | | newssyscomb2009-deueng.deu.eng | 22.6 | 0.499 | | newssyscomb2009-fraeng.fra.eng | 25.8 | 0.530 | | newssyscomb2009-huneng.hun.eng | 15.1 | 0.430 | | newssyscomb2009-itaeng.ita.eng | 29.4 | 0.555 | | newssyscomb2009-spaeng.spa.eng | 26.1 | 0.534 | | news-test2008-deueng.deu.eng | 21.6 | 0.491 | | news-test2008-fraeng.fra.eng | 22.3 | 0.502 | | news-test2008-spaeng.spa.eng | 23.6 | 0.514 | | newstest2009-ceseng.ces.eng | 19.8 | 0.480 | | newstest2009-deueng.deu.eng | 20.9 | 0.487 | | newstest2009-fraeng.fra.eng | 25.0 | 0.523 | | newstest2009-huneng.hun.eng | 14.7 | 0.425 | | newstest2009-itaeng.ita.eng | 27.6 | 0.542 | | newstest2009-spaeng.spa.eng | 25.7 | 0.530 | | newstest2010-ceseng.ces.eng | 20.6 | 0.491 | | newstest2010-deueng.deu.eng | 23.4 | 0.517 | | newstest2010-fraeng.fra.eng | 26.1 | 0.537 | | newstest2010-spaeng.spa.eng | 29.1 
| 0.561 | | newstest2011-ceseng.ces.eng | 21.0 | 0.489 | | newstest2011-deueng.deu.eng | 21.3 | 0.494 | | newstest2011-fraeng.fra.eng | 26.8 | 0.546 | | newstest2011-spaeng.spa.eng | 28.2 | 0.549 | | newstest2012-ceseng.ces.eng | 20.5 | 0.485 | | newstest2012-deueng.deu.eng | 22.3 | 0.503 | | newstest2012-fraeng.fra.eng | 27.5 | 0.545 | | newstest2012-ruseng.rus.eng | 26.6 | 0.532 | | newstest2012-spaeng.spa.eng | 30.3 | 0.567 | | newstest2013-ceseng.ces.eng | 22.5 | 0.498 | | newstest2013-deueng.deu.eng | 25.0 | 0.518 | | newstest2013-fraeng.fra.eng | 27.4 | 0.537 | | newstest2013-ruseng.rus.eng | 21.6 | 0.484 | | newstest2013-spaeng.spa.eng | 28.4 | 0.555 | | newstest2014-csen-ceseng.ces.eng | 24.0 | 0.517 | | newstest2014-deen-deueng.deu.eng | 24.1 | 0.511 | | newstest2014-fren-fraeng.fra.eng | 29.1 | 0.563 | | newstest2014-hien-hineng.hin.eng | 14.0 | 0.414 | | newstest2014-ruen-ruseng.rus.eng | 24.0 | 0.521 | | newstest2015-encs-ceseng.ces.eng | 21.9 | 0.481 | | newstest2015-ende-deueng.deu.eng | 25.5 | 0.519 | | newstest2015-enfi-fineng.fin.eng | 17.4 | 0.441 | | newstest2015-enru-ruseng.rus.eng | 22.4 | 0.494 | | newstest2016-encs-ceseng.ces.eng | 23.0 | 0.500 | | newstest2016-ende-deueng.deu.eng | 30.1 | 0.560 | | newstest2016-enfi-fineng.fin.eng | 18.5 | 0.461 | | newstest2016-enro-roneng.ron.eng | 29.6 | 0.562 | | newstest2016-enru-ruseng.rus.eng | 22.0 | 0.495 | | newstest2016-entr-tureng.tur.eng | 14.8 | 0.415 | | newstest2017-encs-ceseng.ces.eng | 20.2 | 0.475 | | newstest2017-ende-deueng.deu.eng | 26.0 | 0.523 | | newstest2017-enfi-fineng.fin.eng | 19.6 | 0.465 | | newstest2017-enlv-laveng.lav.eng | 16.2 | 0.454 | | newstest2017-enru-ruseng.rus.eng | 24.2 | 0.510 | | newstest2017-entr-tureng.tur.eng | 15.0 | 0.412 | | newstest2017-enzh-zhoeng.zho.eng | 13.7 | 0.412 | | newstest2018-encs-ceseng.ces.eng | 21.2 | 0.486 | | newstest2018-ende-deueng.deu.eng | 31.5 | 0.564 | | newstest2018-enet-esteng.est.eng | 19.7 | 0.473 | | 
newstest2018-enfi-fineng.fin.eng | 15.1 | 0.418 | | newstest2018-enru-ruseng.rus.eng | 21.3 | 0.490 | | newstest2018-entr-tureng.tur.eng | 15.4 | 0.421 | | newstest2018-enzh-zhoeng.zho.eng | 12.9 | 0.408 | | newstest2019-deen-deueng.deu.eng | 27.0 | 0.529 | | newstest2019-fien-fineng.fin.eng | 17.2 | 0.438 | | newstest2019-guen-gujeng.guj.eng | 9.0 | 0.342 | | newstest2019-lten-liteng.lit.eng | 22.6 | 0.512 | | newstest2019-ruen-ruseng.rus.eng | 24.1 | 0.503 | | newstest2019-zhen-zhoeng.zho.eng | 13.9 | 0.427 | | newstestB2016-enfi-fineng.fin.eng | 15.2 | 0.428 | | newstestB2017-enfi-fineng.fin.eng | 16.8 | 0.442 | | newstestB2017-fien-fineng.fin.eng | 16.8 | 0.442 | | Tatoeba-test.abk-eng.abk.eng | 2.4 | 0.190 | | Tatoeba-test.ady-eng.ady.eng | 1.1 | 0.111 | | Tatoeba-test.afh-eng.afh.eng | 1.7 | 0.108 | | Tatoeba-test.afr-eng.afr.eng | 53.0 | 0.672 | | Tatoeba-test.akl-eng.akl.eng | 5.9 | 0.239 | | Tatoeba-test.amh-eng.amh.eng | 25.6 | 0.464 | | Tatoeba-test.ang-eng.ang.eng | 11.7 | 0.289 | | Tatoeba-test.ara-eng.ara.eng | 26.4 | 0.443 | | Tatoeba-test.arg-eng.arg.eng | 35.9 | 0.473 | | Tatoeba-test.asm-eng.asm.eng | 19.8 | 0.365 | | Tatoeba-test.ast-eng.ast.eng | 31.8 | 0.467 | | Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.119 | | Tatoeba-test.awa-eng.awa.eng | 9.7 | 0.271 | | Tatoeba-test.aze-eng.aze.eng | 37.0 | 0.542 | | Tatoeba-test.bak-eng.bak.eng | 13.9 | 0.395 | | Tatoeba-test.bam-eng.bam.eng | 2.2 | 0.094 | | Tatoeba-test.bel-eng.bel.eng | 36.8 | 0.549 | | Tatoeba-test.ben-eng.ben.eng | 39.7 | 0.546 | | Tatoeba-test.bho-eng.bho.eng | 33.6 | 0.540 | | Tatoeba-test.bod-eng.bod.eng | 1.1 | 0.147 | | Tatoeba-test.bre-eng.bre.eng | 14.2 | 0.303 | | Tatoeba-test.brx-eng.brx.eng | 1.7 | 0.130 | | Tatoeba-test.bul-eng.bul.eng | 46.0 | 0.621 | | Tatoeba-test.cat-eng.cat.eng | 46.6 | 0.636 | | Tatoeba-test.ceb-eng.ceb.eng | 17.4 | 0.347 | | Tatoeba-test.ces-eng.ces.eng | 41.3 | 0.586 | | Tatoeba-test.cha-eng.cha.eng | 7.9 | 0.232 | | Tatoeba-test.che-eng.che.eng | 0.7 
| 0.104 | | Tatoeba-test.chm-eng.chm.eng | 7.3 | 0.261 | | Tatoeba-test.chr-eng.chr.eng | 8.8 | 0.244 | | Tatoeba-test.chv-eng.chv.eng | 11.0 | 0.319 | | Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.204 | | Tatoeba-test.cos-eng.cos.eng | 58.2 | 0.643 | | Tatoeba-test.crh-eng.crh.eng | 26.3 | 0.399 | | Tatoeba-test.csb-eng.csb.eng | 18.8 | 0.389 | | Tatoeba-test.cym-eng.cym.eng | 23.4 | 0.407 | | Tatoeba-test.dan-eng.dan.eng | 50.5 | 0.659 | | Tatoeba-test.deu-eng.deu.eng | 39.6 | 0.579 | | Tatoeba-test.dsb-eng.dsb.eng | 24.3 | 0.449 | | Tatoeba-test.dtp-eng.dtp.eng | 1.0 | 0.149 | | Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.061 | | Tatoeba-test.egl-eng.egl.eng | 7.6 | 0.236 | | Tatoeba-test.ell-eng.ell.eng | 55.4 | 0.682 | | Tatoeba-test.enm-eng.enm.eng | 28.0 | 0.489 | | Tatoeba-test.epo-eng.epo.eng | 41.8 | 0.591 | | Tatoeba-test.est-eng.est.eng | 41.5 | 0.581 | | Tatoeba-test.eus-eng.eus.eng | 37.8 | 0.557 | | Tatoeba-test.ewe-eng.ewe.eng | 10.7 | 0.262 | | Tatoeba-test.ext-eng.ext.eng | 25.5 | 0.405 | | Tatoeba-test.fao-eng.fao.eng | 28.7 | 0.469 | | Tatoeba-test.fas-eng.fas.eng | 7.5 | 0.281 | | Tatoeba-test.fij-eng.fij.eng | 24.2 | 0.320 | | Tatoeba-test.fin-eng.fin.eng | 35.8 | 0.534 | | Tatoeba-test.fkv-eng.fkv.eng | 15.5 | 0.434 | | Tatoeba-test.fra-eng.fra.eng | 45.1 | 0.618 | | Tatoeba-test.frm-eng.frm.eng | 29.6 | 0.427 | | Tatoeba-test.frr-eng.frr.eng | 5.5 | 0.138 | | Tatoeba-test.fry-eng.fry.eng | 25.3 | 0.455 | | Tatoeba-test.ful-eng.ful.eng | 1.1 | 0.127 | | Tatoeba-test.gcf-eng.gcf.eng | 16.0 | 0.315 | | Tatoeba-test.gil-eng.gil.eng | 46.7 | 0.587 | | Tatoeba-test.gla-eng.gla.eng | 20.2 | 0.358 | | Tatoeba-test.gle-eng.gle.eng | 43.9 | 0.592 | | Tatoeba-test.glg-eng.glg.eng | 45.1 | 0.623 | | Tatoeba-test.glv-eng.glv.eng | 3.3 | 0.119 | | Tatoeba-test.gos-eng.gos.eng | 20.1 | 0.364 | | Tatoeba-test.got-eng.got.eng | 0.1 | 0.041 | | Tatoeba-test.grc-eng.grc.eng | 2.1 | 0.137 | | Tatoeba-test.grn-eng.grn.eng | 1.7 | 0.152 | | Tatoeba-test.gsw-eng.gsw.eng | 
18.2 | 0.334 | | Tatoeba-test.guj-eng.guj.eng | 21.7 | 0.373 | | Tatoeba-test.hat-eng.hat.eng | 34.5 | 0.502 | | Tatoeba-test.hau-eng.hau.eng | 10.5 | 0.295 | | Tatoeba-test.haw-eng.haw.eng | 2.8 | 0.160 | | Tatoeba-test.hbs-eng.hbs.eng | 46.7 | 0.623 | | Tatoeba-test.heb-eng.heb.eng | 33.0 | 0.492 | | Tatoeba-test.hif-eng.hif.eng | 17.0 | 0.391 | | Tatoeba-test.hil-eng.hil.eng | 16.0 | 0.339 | | Tatoeba-test.hin-eng.hin.eng | 36.4 | 0.533 | | Tatoeba-test.hmn-eng.hmn.eng | 0.4 | 0.131 | | Tatoeba-test.hoc-eng.hoc.eng | 0.7 | 0.132 | | Tatoeba-test.hsb-eng.hsb.eng | 41.9 | 0.551 | | Tatoeba-test.hun-eng.hun.eng | 33.2 | 0.510 | | Tatoeba-test.hye-eng.hye.eng | 32.2 | 0.487 | | Tatoeba-test.iba-eng.iba.eng | 9.4 | 0.278 | | Tatoeba-test.ibo-eng.ibo.eng | 5.8 | 0.200 | | Tatoeba-test.ido-eng.ido.eng | 31.7 | 0.503 | | Tatoeba-test.iku-eng.iku.eng | 9.1 | 0.164 | | Tatoeba-test.ile-eng.ile.eng | 42.2 | 0.595 | | Tatoeba-test.ilo-eng.ilo.eng | 29.7 | 0.485 | | Tatoeba-test.ina-eng.ina.eng | 42.1 | 0.607 | | Tatoeba-test.isl-eng.isl.eng | 35.7 | 0.527 | | Tatoeba-test.ita-eng.ita.eng | 54.8 | 0.686 | | Tatoeba-test.izh-eng.izh.eng | 28.3 | 0.526 | | Tatoeba-test.jav-eng.jav.eng | 10.0 | 0.282 | | Tatoeba-test.jbo-eng.jbo.eng | 0.3 | 0.115 | | Tatoeba-test.jdt-eng.jdt.eng | 5.3 | 0.140 | | Tatoeba-test.jpn-eng.jpn.eng | 18.8 | 0.387 | | Tatoeba-test.kab-eng.kab.eng | 3.9 | 0.205 | | Tatoeba-test.kal-eng.kal.eng | 16.9 | 0.329 | | Tatoeba-test.kan-eng.kan.eng | 16.2 | 0.374 | | Tatoeba-test.kat-eng.kat.eng | 31.1 | 0.493 | | Tatoeba-test.kaz-eng.kaz.eng | 24.5 | 0.437 | | Tatoeba-test.kek-eng.kek.eng | 7.4 | 0.192 | | Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.154 | | Tatoeba-test.khm-eng.khm.eng | 12.2 | 0.290 | | Tatoeba-test.kin-eng.kin.eng | 22.5 | 0.355 | | Tatoeba-test.kir-eng.kir.eng | 27.2 | 0.470 | | Tatoeba-test.kjh-eng.kjh.eng | 2.1 | 0.129 | | Tatoeba-test.kok-eng.kok.eng | 4.5 | 0.259 | | Tatoeba-test.kom-eng.kom.eng | 1.4 | 0.099 | | Tatoeba-test.krl-eng.krl.eng 
| 26.1 | 0.387 | | Tatoeba-test.ksh-eng.ksh.eng | 5.5 | 0.256 | | Tatoeba-test.kum-eng.kum.eng | 9.3 | 0.288 | | Tatoeba-test.kur-eng.kur.eng | 9.6 | 0.208 | | Tatoeba-test.lad-eng.lad.eng | 30.1 | 0.475 | | Tatoeba-test.lah-eng.lah.eng | 11.6 | 0.284 | | Tatoeba-test.lao-eng.lao.eng | 4.5 | 0.214 | | Tatoeba-test.lat-eng.lat.eng | 21.5 | 0.402 | | Tatoeba-test.lav-eng.lav.eng | 40.2 | 0.577 | | Tatoeba-test.ldn-eng.ldn.eng | 0.8 | 0.115 | | Tatoeba-test.lfn-eng.lfn.eng | 23.0 | 0.433 | | Tatoeba-test.lij-eng.lij.eng | 9.3 | 0.287 | | Tatoeba-test.lin-eng.lin.eng | 2.4 | 0.196 | | Tatoeba-test.lit-eng.lit.eng | 44.0 | 0.597 | | Tatoeba-test.liv-eng.liv.eng | 1.6 | 0.115 | | Tatoeba-test.lkt-eng.lkt.eng | 2.0 | 0.113 | | Tatoeba-test.lld-eng.lld.eng | 18.3 | 0.312 | | Tatoeba-test.lmo-eng.lmo.eng | 25.4 | 0.395 | | Tatoeba-test.ltz-eng.ltz.eng | 35.9 | 0.509 | | Tatoeba-test.lug-eng.lug.eng | 5.1 | 0.357 | | Tatoeba-test.mad-eng.mad.eng | 2.8 | 0.123 | | Tatoeba-test.mah-eng.mah.eng | 5.7 | 0.175 | | Tatoeba-test.mai-eng.mai.eng | 56.3 | 0.703 | | Tatoeba-test.mal-eng.mal.eng | 37.5 | 0.534 | | Tatoeba-test.mar-eng.mar.eng | 22.8 | 0.470 | | Tatoeba-test.mdf-eng.mdf.eng | 2.0 | 0.110 | | Tatoeba-test.mfe-eng.mfe.eng | 59.2 | 0.764 | | Tatoeba-test.mic-eng.mic.eng | 9.0 | 0.199 | | Tatoeba-test.mkd-eng.mkd.eng | 44.3 | 0.593 | | Tatoeba-test.mlg-eng.mlg.eng | 31.9 | 0.424 | | Tatoeba-test.mlt-eng.mlt.eng | 38.6 | 0.540 | | Tatoeba-test.mnw-eng.mnw.eng | 2.5 | 0.101 | | Tatoeba-test.moh-eng.moh.eng | 0.3 | 0.110 | | Tatoeba-test.mon-eng.mon.eng | 13.5 | 0.334 | | Tatoeba-test.mri-eng.mri.eng | 8.5 | 0.260 | | Tatoeba-test.msa-eng.msa.eng | 33.9 | 0.520 | | Tatoeba-test.multi.eng | 34.7 | 0.518 | | Tatoeba-test.mwl-eng.mwl.eng | 37.4 | 0.630 | | Tatoeba-test.mya-eng.mya.eng | 15.5 | 0.335 | | Tatoeba-test.myv-eng.myv.eng | 0.8 | 0.118 | | Tatoeba-test.nau-eng.nau.eng | 9.0 | 0.186 | | Tatoeba-test.nav-eng.nav.eng | 1.3 | 0.144 | | Tatoeba-test.nds-eng.nds.eng | 30.7 | 
0.495 | | Tatoeba-test.nep-eng.nep.eng | 3.5 | 0.168 | | Tatoeba-test.niu-eng.niu.eng | 42.7 | 0.492 | | Tatoeba-test.nld-eng.nld.eng | 47.9 | 0.640 | | Tatoeba-test.nog-eng.nog.eng | 12.7 | 0.284 | | Tatoeba-test.non-eng.non.eng | 43.8 | 0.586 | | Tatoeba-test.nor-eng.nor.eng | 45.5 | 0.619 | | Tatoeba-test.nov-eng.nov.eng | 26.9 | 0.472 | | Tatoeba-test.nya-eng.nya.eng | 33.2 | 0.456 | | Tatoeba-test.oci-eng.oci.eng | 17.9 | 0.370 | | Tatoeba-test.ori-eng.ori.eng | 14.6 | 0.305 | | Tatoeba-test.orv-eng.orv.eng | 11.0 | 0.283 | | Tatoeba-test.oss-eng.oss.eng | 4.1 | 0.211 | | Tatoeba-test.ota-eng.ota.eng | 4.1 | 0.216 | | Tatoeba-test.pag-eng.pag.eng | 24.3 | 0.468 | | Tatoeba-test.pan-eng.pan.eng | 16.4 | 0.358 | | Tatoeba-test.pap-eng.pap.eng | 53.2 | 0.628 | | Tatoeba-test.pau-eng.pau.eng | 3.7 | 0.173 | | Tatoeba-test.pdc-eng.pdc.eng | 45.3 | 0.569 | | Tatoeba-test.pms-eng.pms.eng | 14.0 | 0.345 | | Tatoeba-test.pol-eng.pol.eng | 41.7 | 0.588 | | Tatoeba-test.por-eng.por.eng | 51.4 | 0.669 | | Tatoeba-test.ppl-eng.ppl.eng | 0.4 | 0.134 | | Tatoeba-test.prg-eng.prg.eng | 4.1 | 0.198 | | Tatoeba-test.pus-eng.pus.eng | 6.7 | 0.233 | | Tatoeba-test.quc-eng.quc.eng | 3.5 | 0.091 | | Tatoeba-test.qya-eng.qya.eng | 0.2 | 0.090 | | Tatoeba-test.rap-eng.rap.eng | 17.5 | 0.230 | | Tatoeba-test.rif-eng.rif.eng | 4.2 | 0.164 | | Tatoeba-test.roh-eng.roh.eng | 24.6 | 0.464 | | Tatoeba-test.rom-eng.rom.eng | 3.4 | 0.212 | | Tatoeba-test.ron-eng.ron.eng | 45.2 | 0.620 | | Tatoeba-test.rue-eng.rue.eng | 21.4 | 0.390 | | Tatoeba-test.run-eng.run.eng | 24.5 | 0.392 | | Tatoeba-test.rus-eng.rus.eng | 42.7 | 0.591 | | Tatoeba-test.sag-eng.sag.eng | 3.4 | 0.187 | | Tatoeba-test.sah-eng.sah.eng | 5.0 | 0.177 | | Tatoeba-test.san-eng.san.eng | 2.0 | 0.172 | | Tatoeba-test.scn-eng.scn.eng | 35.8 | 0.410 | | Tatoeba-test.sco-eng.sco.eng | 34.6 | 0.520 | | Tatoeba-test.sgs-eng.sgs.eng | 21.8 | 0.299 | | Tatoeba-test.shs-eng.shs.eng | 1.8 | 0.122 | | Tatoeba-test.shy-eng.shy.eng | 1.4 | 
0.104 | | Tatoeba-test.sin-eng.sin.eng | 20.6 | 0.429 | | Tatoeba-test.sjn-eng.sjn.eng | 1.2 | 0.095 | | Tatoeba-test.slv-eng.slv.eng | 37.0 | 0.545 | | Tatoeba-test.sma-eng.sma.eng | 4.4 | 0.147 | | Tatoeba-test.sme-eng.sme.eng | 8.9 | 0.229 | | Tatoeba-test.smo-eng.smo.eng | 37.7 | 0.483 | | Tatoeba-test.sna-eng.sna.eng | 18.0 | 0.359 | | Tatoeba-test.snd-eng.snd.eng | 28.1 | 0.444 | | Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 | | Tatoeba-test.spa-eng.spa.eng | 47.9 | 0.645 | | Tatoeba-test.sqi-eng.sqi.eng | 46.9 | 0.634 | | Tatoeba-test.stq-eng.stq.eng | 8.1 | 0.379 | | Tatoeba-test.sun-eng.sun.eng | 23.8 | 0.369 | | Tatoeba-test.swa-eng.swa.eng | 6.5 | 0.193 | | Tatoeba-test.swe-eng.swe.eng | 51.4 | 0.655 | | Tatoeba-test.swg-eng.swg.eng | 18.5 | 0.342 | | Tatoeba-test.tah-eng.tah.eng | 25.6 | 0.249 | | Tatoeba-test.tam-eng.tam.eng | 29.1 | 0.437 | | Tatoeba-test.tat-eng.tat.eng | 12.9 | 0.327 | | Tatoeba-test.tel-eng.tel.eng | 21.2 | 0.386 | | Tatoeba-test.tet-eng.tet.eng | 9.2 | 0.215 | | Tatoeba-test.tgk-eng.tgk.eng | 12.7 | 0.374 | | Tatoeba-test.tha-eng.tha.eng | 36.3 | 0.531 | | Tatoeba-test.tir-eng.tir.eng | 9.1 | 0.267 | | Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.084 | | Tatoeba-test.tly-eng.tly.eng | 2.1 | 0.128 | | Tatoeba-test.toi-eng.toi.eng | 5.3 | 0.150 | | Tatoeba-test.ton-eng.ton.eng | 39.5 | 0.473 | | Tatoeba-test.tpw-eng.tpw.eng | 1.5 | 0.160 | | Tatoeba-test.tso-eng.tso.eng | 44.7 | 0.526 | | Tatoeba-test.tuk-eng.tuk.eng | 18.6 | 0.401 | | Tatoeba-test.tur-eng.tur.eng | 40.5 | 0.573 | | Tatoeba-test.tvl-eng.tvl.eng | 55.0 | 0.593 | | Tatoeba-test.tyv-eng.tyv.eng | 19.1 | 0.477 | | Tatoeba-test.tzl-eng.tzl.eng | 17.7 | 0.333 | | Tatoeba-test.udm-eng.udm.eng | 3.4 | 0.217 | | Tatoeba-test.uig-eng.uig.eng | 11.4 | 0.289 | | Tatoeba-test.ukr-eng.ukr.eng | 43.1 | 0.595 | | Tatoeba-test.umb-eng.umb.eng | 9.2 | 0.260 | | Tatoeba-test.urd-eng.urd.eng | 23.2 | 0.426 | | Tatoeba-test.uzb-eng.uzb.eng | 19.0 | 0.342 | | Tatoeba-test.vec-eng.vec.eng | 
41.1 | 0.409 | | Tatoeba-test.vie-eng.vie.eng | 30.6 | 0.481 | | Tatoeba-test.vol-eng.vol.eng | 1.8 | 0.143 | | Tatoeba-test.war-eng.war.eng | 15.9 | 0.352 | | Tatoeba-test.wln-eng.wln.eng | 12.6 | 0.291 | | Tatoeba-test.wol-eng.wol.eng | 4.4 | 0.138 | | Tatoeba-test.xal-eng.xal.eng | 0.9 | 0.153 | | Tatoeba-test.xho-eng.xho.eng | 35.4 | 0.513 | | Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.387 | | Tatoeba-test.yor-eng.yor.eng | 19.3 | 0.327 | | Tatoeba-test.zho-eng.zho.eng | 25.8 | 0.448 | | Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.567 | | Tatoeba-test.zza-eng.zza.eng | 1.6 | 0.125 | ### System Info: - hf_name: mul-eng - source_languages: mul - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en'] - src_constituents: {'sjn_Latn', 'cat', 'nan', 'spa', 'ile_Latn', 'pap', 'mwl', 'uzb_Latn', 'mww', 'hil', 'lij', 'avk_Latn', 'lad_Latn', 'lat_Latn', 'bos_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi_Latn', 'awa', 'swg', 'zsm_Latn', 'zho_Hant', 'gcf_Latn', 'uzb_Cyrl', 'isl', 'lfn_Latn', 'shs_Latn', 'nov_Latn', 'bho', 'ltz', 'lzh', 'kur_Latn', 'sun', 'arg', 'pes_Thaa', 'sqi', 'uig_Arab', 'csb_Latn', 'fra', 'hat', 'liv_Latn', 
'non_Latn', 'sco', 'cmn_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul_Latn', 'amh', 'lfn_Cyrl', 'eus', 'fkv_Latn', 'tur', 'pus', 'afr', 'brx_Latn', 'nya', 'acm', 'ota_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho_Hans', 'tmw_Latn', 'kjh', 'ota_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh_Hans', 'ara', 'tly_Latn', 'lug', 'brx', 'bul', 'bel', 'vol_Latn', 'kat', 'gan', 'got_Goth', 'vro', 'ext', 'afh_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif_Latn', 'cjy_Hant', 'bre', 'ceb', 'mah', 'nob_Hebr', 'crh_Latn', 'prg_Latn', 'khm', 'ang_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie_Hani', 'arz', 'yue', 'kha', 'san_Deva', 'jbo_Latn', 'gos', 'hau_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig_Cyrl', 'fao', 'mnw', 'zho', 'orv_Cyrl', 'iba', 'bel_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc_Grek', 'tpw_Latn', 'oci', 'mfe', 'sna', 'kir_Cyrl', 'tat_Latn', 'gom', 'ido_Latn', 'sgs', 'pau', 'tgk_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp_Latn', 'wuu', 'dtp', 'jbo_Cyrl', 'tet', 'bod', 'yue_Hans', 'zlm_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif_Latn', 'vie', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina_Latn', 'cjy_Hans', 'jdt_Cyrl', 'gsw', 'glv', 'khm_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd_Arab', 'arq', 'mri', 'kur_Arab', 'por', 'hin', 'shy_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue_Hant', 'kpv', 'tam', 'est', 'frm_Latn', 'hoc_Latn', 'bam_Latn', 'kek_Latn', 'ksh', 'tlh_Latn', 'ltg', 'pan_Guru', 'hnj_Latn', 'cor', 'gle', 'swe', 'lin', 'qya_Latn', 'kum', 'mad', 'cmn_Hant', 'fuv', 'nau', 'mon', 'akl_Latn', 'guj', 'kaz_Latn', 'wln', 'tuk_Latn', 'jav_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws_Latn', 'urd', 'stq', 'tat_Arab', 
'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld_Latn', 'tzl_Latn', 'mdf', 'ike_Latn', 'ces', 'ldn_Latn', 'egl', 'heb', 'vec', 'zul', 'max_Latn', 'pes_Latn', 'yid', 'mal', 'nds'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt - src_alpha3: mul - tgt_alpha3: eng - short_pair: mul-en - chrF2_score: 0.518 - bleu: 34.7 - brevity_penalty: 1.0 - ref_len: 72346.0 - src_name: Multiple languages - tgt_name: English - train_date: 2020-08-01 - src_alpha2: mul - tgt_alpha2: en - prefer_old: False - long_pair: mul-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
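The system info above reports `brevity_penalty: 1.0` together with `bleu` and `ref_len`. A minimal sketch of how BLEU's brevity penalty is defined (standard formulation from Papineni et al., 2002; not the exact scoring code used for these cards):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    # BLEU brevity penalty: penalize hypotheses shorter than the
    # reference; hypotheses at least as long are never rewarded.
    if hyp_len == 0:
        return 0.0
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

A reported penalty of 1.0, as here, means the system output was at least as long as the 72,346-token reference.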
{"language": ["ca", "es", "os", "eo", "ro", "fy", "cy", "is", "lb", "su", "an", "sq", "fr", "ht", "rm", "cv", "ig", "am", "eu", "tr", "ps", "af", "ny", "ch", "uk", "sl", "lt", "tk", "sg", "ar", "lg", "bg", "be", "ka", "gd", "ja", "si", "br", "mh", "km", "th", "ty", "rw", "te", "mk", "or", "wo", "kl", "mr", "ru", "yo", "hu", "fo", "zh", "ti", "co", "ee", "oc", "sn", "mt", "ts", "pl", "gl", "nb", "bn", "tt", "bo", "lo", "id", "gn", "nv", "hy", "kn", "to", "io", "so", "vi", "da", "fj", "gv", "sm", "nl", "mi", "pt", "hi", "se", "as", "ta", "et", "kw", "ga", "sv", "ln", "na", "mn", "gu", "wa", "lv", "jv", "el", "my", "ba", "it", "hr", "ur", "ce", "nn", "fi", "mg", "rn", "xh", "ab", "de", "cs", "he", "zu", "yi", "ml", "mul", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-mul-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "es", "os", "eo", "ro", "fy", "cy", "is", "lb", "su", "an", "sq", "fr", "ht", "rm", "cv", "ig", "am", "eu", "tr", "ps", "af", "ny", "ch", "uk", "sl", "lt", "tk", "sg", "ar", "lg", "bg", "be", "ka", "gd", "ja", "si", "br", "mh", "km", "th", "ty", "rw", "te", "mk", "or", "wo", "kl", "mr", "ru", "yo", "hu", "fo", "zh", "ti", "co", "ee", "oc", "sn", "mt", "ts", "pl", "gl", "nb", "bn", "tt", "bo", "lo", "id", "gn", "nv", "hy", "kn", "to", "io", "so", "vi", "da", "fj", "gv", "sm", "nl", "mi", "pt", "hi", "se", "as", "ta", "et", "kw", "ga", "sv", "ln", "na", "mn", "gu", "wa", "lv", "jv", "el", "my", "ba", "it", "hr", "ur", "ce", "nn", "fi", "mg", "rn", "xh", "ab", "de", "cs", "he", "zu", "yi", "ml", "mul", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ca", "es", "os", "eo", "ro", "fy", "cy", "is", "lb", "su", "an", "sq", "fr", "ht", "rm", "cv", "ig", "am", "eu", "tr", "ps", "af", "ny", "ch", "uk", "sl", "lt", "tk", "sg", "ar", "lg", "bg", "be", "ka", "gd", "ja", "si", "br", "mh", "km", "th", "ty", "rw", "te", "mk", "or", "wo", "kl", "mr", "ru", "yo", "hu", "fo", "zh", "ti", "co", "ee", "oc", "sn", "mt", "ts", "pl", "gl", "nb", "bn", "tt", "bo", "lo", "id", "gn", "nv", "hy", "kn", "to", "io", "so", "vi", "da", "fj", "gv", "sm", "nl", "mi", "pt", "hi", "se", "as", "ta", "et", "kw", "ga", "sv", "ln", "na", "mn", "gu", "wa", "lv", "jv", "el", "my", "ba", "it", "hr", "ur", "ce", "nn", "fi", "mg", "rn", "xh", "ab", "de", "cs", "he", "zu", "yi", "ml", "mul", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #os #eo #ro #fy #cy #is #lb #su #an #sq #fr #ht #rm #cv #ig #am #eu #tr #ps #af #ny #ch #uk #sl #lt #tk #sg #ar #lg #bg #be #ka #gd #ja #si #br #mh #km #th #ty #rw #te #mk #or #wo #kl #mr #ru #yo #hu #fo #zh #ti #co #ee #oc #sn #mt #ts #pl #gl #nb #bn #tt #bo #lo #id #gn #nv #hy #kn #to #io #so #vi #da #fj #gv #sm #nl #mi #pt #hi #se #as #ta #et #kw #ga #sv #ln #na #mn #gu #wa #lv #jv #el #my #ba #it #hr #ur #ce #nn #fi #mg #rn #xh #ab #de #cs #he #zu #yi #ml #mul #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### mul-eng * source group: Multiple languages * target group: English * OPUS readme: mul-eng * model: transformer * source language(s): abk acm ady afb afh\_Latn afr akl\_Latn aln amh ang\_Latn apc ara arg arq ary arz asm ast avk\_Latn awa aze\_Latn bak bam\_Latn bel bel\_Latn ben bho bod bos\_Latn bre brx brx\_Latn bul bul\_Latn cat ceb ces cha che chr chv cjy\_Hans cjy\_Hant cmn cmn\_Hans cmn\_Hant cor cos crh crh\_Latn csb\_Latn cym dan deu dsb dtp dws\_Latn egl ell enm\_Latn epo est eus ewe ext fao fij fin fkv\_Latn fra frm\_Latn frr fry fuc fuv gan gcf\_Latn gil gla gle glg glv gom gos got\_Goth grc\_Grek grn gsw guj hat hau\_Latn haw heb hif\_Latn hil hin hnj\_Latn hoc hoc\_Latn hrv hsb hun hye iba ibo ido ido\_Latn ike\_Latn ile\_Latn ilo ina\_Latn ind isl ita izh jav jav\_Java jbo jbo\_Cyrl jbo\_Latn jdt\_Cyrl jpn kab kal kan kat kaz\_Cyrl kaz\_Latn kek\_Latn kha khm khm\_Latn kin kir\_Cyrl kjh kpv krl ksh kum kur\_Arab kur\_Latn lad lad\_Latn lao lat\_Latn lav ldn\_Latn lfn\_Cyrl lfn\_Latn lij lin lit liv\_Latn lkt lld\_Latn lmo ltg ltz lug lzh lzh\_Hans mad mah mai mal mar max\_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob\_Hebr nog non\_Latn nov\_Latn npi nya oci ori orv\_Cyrl oss ota\_Arab ota\_Latn pag pan\_Guru pap pau pdc pes pes\_Latn pes\_Thaa pms pnb pol por ppl\_Latn prg\_Latn pus quc qya qya\_Latn rap rif\_Latn roh rom ron rue run rus sag sah san\_Deva scn sco sgs shs\_Latn shy\_Latn sin sjn\_Latn slv sma sme smo sna snd\_Arab som spa sqi srp\_Cyrl srp\_Latn stq sun swe swg swh tah tam tat tat\_Arab tat\_Latn tel tet tgk\_Cyrl tha tir tlh\_Latn tly\_Latn tmw\_Latn toi\_Latn ton tpw\_Latn tso tuk tuk\_Latn tur tvl tyv tzl tzl\_Latn udm uig\_Arab uig\_Cyrl ukr umb urd uzb\_Cyrl uzb\_Latn vec vie vie\_Hani vol\_Latn vro war wln wol wuu xal xho yid yor yue yue\_Hans yue\_Hant zho zho\_Hans zho\_Hant zlm\_Latn zsm\_Latn zul zza * target language(s): eng * model: transformer * pre-processing: 
normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 8.5, chr-F: 0.341 testset: URL, BLEU: 16.8, chr-F: 0.441 testset: URL, BLEU: 31.3, chr-F: 0.580 testset: URL, BLEU: 16.4, chr-F: 0.422 testset: URL, BLEU: 21.3, chr-F: 0.502 testset: URL, BLEU: 12.7, chr-F: 0.409 testset: URL, BLEU: 19.8, chr-F: 0.467 testset: URL, BLEU: 13.3, chr-F: 0.385 testset: URL, BLEU: 19.9, chr-F: 0.482 testset: URL, BLEU: 26.7, chr-F: 0.520 testset: URL, BLEU: 29.8, chr-F: 0.541 testset: URL, BLEU: 21.1, chr-F: 0.487 testset: URL, BLEU: 22.6, chr-F: 0.499 testset: URL, BLEU: 25.8, chr-F: 0.530 testset: URL, BLEU: 15.1, chr-F: 0.430 testset: URL, BLEU: 29.4, chr-F: 0.555 testset: URL, BLEU: 26.1, chr-F: 0.534 testset: URL, BLEU: 21.6, chr-F: 0.491 testset: URL, BLEU: 22.3, chr-F: 0.502 testset: URL, BLEU: 23.6, chr-F: 0.514 testset: URL, BLEU: 19.8, chr-F: 0.480 testset: URL, BLEU: 20.9, chr-F: 0.487 testset: URL, BLEU: 25.0, chr-F: 0.523 testset: URL, BLEU: 14.7, chr-F: 0.425 testset: URL, BLEU: 27.6, chr-F: 0.542 testset: URL, BLEU: 25.7, chr-F: 0.530 testset: URL, BLEU: 20.6, chr-F: 0.491 testset: URL, BLEU: 23.4, chr-F: 0.517 testset: URL, BLEU: 26.1, chr-F: 0.537 testset: URL, BLEU: 29.1, chr-F: 0.561 testset: URL, BLEU: 21.0, chr-F: 0.489 testset: URL, BLEU: 21.3, chr-F: 0.494 testset: URL, BLEU: 26.8, chr-F: 0.546 testset: URL, BLEU: 28.2, chr-F: 0.549 testset: URL, BLEU: 20.5, chr-F: 0.485 testset: URL, BLEU: 22.3, chr-F: 0.503 testset: URL, BLEU: 27.5, chr-F: 0.545 testset: URL, BLEU: 26.6, chr-F: 0.532 testset: URL, BLEU: 30.3, chr-F: 0.567 testset: URL, BLEU: 22.5, chr-F: 0.498 testset: URL, BLEU: 25.0, chr-F: 0.518 testset: URL, BLEU: 27.4, chr-F: 0.537 testset: URL, BLEU: 21.6, chr-F: 0.484 testset: URL, BLEU: 28.4, chr-F: 0.555 testset: URL, BLEU: 24.0, chr-F: 0.517 testset: URL, BLEU: 24.1, chr-F: 0.511 testset: URL, BLEU: 29.1, chr-F: 0.563 testset: URL, 
BLEU: 14.0, chr-F: 0.414 testset: URL, BLEU: 24.0, chr-F: 0.521 testset: URL, BLEU: 21.9, chr-F: 0.481 testset: URL, BLEU: 25.5, chr-F: 0.519 testset: URL, BLEU: 17.4, chr-F: 0.441 testset: URL, BLEU: 22.4, chr-F: 0.494 testset: URL, BLEU: 23.0, chr-F: 0.500 testset: URL, BLEU: 30.1, chr-F: 0.560 testset: URL, BLEU: 18.5, chr-F: 0.461 testset: URL, BLEU: 29.6, chr-F: 0.562 testset: URL, BLEU: 22.0, chr-F: 0.495 testset: URL, BLEU: 14.8, chr-F: 0.415 testset: URL, BLEU: 20.2, chr-F: 0.475 testset: URL, BLEU: 26.0, chr-F: 0.523 testset: URL, BLEU: 19.6, chr-F: 0.465 testset: URL, BLEU: 16.2, chr-F: 0.454 testset: URL, BLEU: 24.2, chr-F: 0.510 testset: URL, BLEU: 15.0, chr-F: 0.412 testset: URL, BLEU: 13.7, chr-F: 0.412 testset: URL, BLEU: 21.2, chr-F: 0.486 testset: URL, BLEU: 31.5, chr-F: 0.564 testset: URL, BLEU: 19.7, chr-F: 0.473 testset: URL, BLEU: 15.1, chr-F: 0.418 testset: URL, BLEU: 21.3, chr-F: 0.490 testset: URL, BLEU: 15.4, chr-F: 0.421 testset: URL, BLEU: 12.9, chr-F: 0.408 testset: URL, BLEU: 27.0, chr-F: 0.529 testset: URL, BLEU: 17.2, chr-F: 0.438 testset: URL, BLEU: 9.0, chr-F: 0.342 testset: URL, BLEU: 22.6, chr-F: 0.512 testset: URL, BLEU: 24.1, chr-F: 0.503 testset: URL, BLEU: 13.9, chr-F: 0.427 testset: URL, BLEU: 15.2, chr-F: 0.428 testset: URL, BLEU: 16.8, chr-F: 0.442 testset: URL, BLEU: 16.8, chr-F: 0.442 testset: URL, BLEU: 2.4, chr-F: 0.190 testset: URL, BLEU: 1.1, chr-F: 0.111 testset: URL, BLEU: 1.7, chr-F: 0.108 testset: URL, BLEU: 53.0, chr-F: 0.672 testset: URL, BLEU: 5.9, chr-F: 0.239 testset: URL, BLEU: 25.6, chr-F: 0.464 testset: URL, BLEU: 11.7, chr-F: 0.289 testset: URL, BLEU: 26.4, chr-F: 0.443 testset: URL, BLEU: 35.9, chr-F: 0.473 testset: URL, BLEU: 19.8, chr-F: 0.365 testset: URL, BLEU: 31.8, chr-F: 0.467 testset: URL, BLEU: 0.4, chr-F: 0.119 testset: URL, BLEU: 9.7, chr-F: 0.271 testset: URL, BLEU: 37.0, chr-F: 0.542 testset: URL, BLEU: 13.9, chr-F: 0.395 testset: URL, BLEU: 2.2, chr-F: 0.094 testset: URL, BLEU: 36.8, chr-F: 
0.549 testset: URL, BLEU: 39.7, chr-F: 0.546 testset: URL, BLEU: 33.6, chr-F: 0.540 testset: URL, BLEU: 1.1, chr-F: 0.147 testset: URL, BLEU: 14.2, chr-F: 0.303 testset: URL, BLEU: 1.7, chr-F: 0.130 testset: URL, BLEU: 46.0, chr-F: 0.621 testset: URL, BLEU: 46.6, chr-F: 0.636 testset: URL, BLEU: 17.4, chr-F: 0.347 testset: URL, BLEU: 41.3, chr-F: 0.586 testset: URL, BLEU: 7.9, chr-F: 0.232 testset: URL, BLEU: 0.7, chr-F: 0.104 testset: URL, BLEU: 7.3, chr-F: 0.261 testset: URL, BLEU: 8.8, chr-F: 0.244 testset: URL, BLEU: 11.0, chr-F: 0.319 testset: URL, BLEU: 5.4, chr-F: 0.204 testset: URL, BLEU: 58.2, chr-F: 0.643 testset: URL, BLEU: 26.3, chr-F: 0.399 testset: URL, BLEU: 18.8, chr-F: 0.389 testset: URL, BLEU: 23.4, chr-F: 0.407 testset: URL, BLEU: 50.5, chr-F: 0.659 testset: URL, BLEU: 39.6, chr-F: 0.579 testset: URL, BLEU: 24.3, chr-F: 0.449 testset: URL, BLEU: 1.0, chr-F: 0.149 testset: URL, BLEU: 1.6, chr-F: 0.061 testset: URL, BLEU: 7.6, chr-F: 0.236 testset: URL, BLEU: 55.4, chr-F: 0.682 testset: URL, BLEU: 28.0, chr-F: 0.489 testset: URL, BLEU: 41.8, chr-F: 0.591 testset: URL, BLEU: 41.5, chr-F: 0.581 testset: URL, BLEU: 37.8, chr-F: 0.557 testset: URL, BLEU: 10.7, chr-F: 0.262 testset: URL, BLEU: 25.5, chr-F: 0.405 testset: URL, BLEU: 28.7, chr-F: 0.469 testset: URL, BLEU: 7.5, chr-F: 0.281 testset: URL, BLEU: 24.2, chr-F: 0.320 testset: URL, BLEU: 35.8, chr-F: 0.534 testset: URL, BLEU: 15.5, chr-F: 0.434 testset: URL, BLEU: 45.1, chr-F: 0.618 testset: URL, BLEU: 29.6, chr-F: 0.427 testset: URL, BLEU: 5.5, chr-F: 0.138 testset: URL, BLEU: 25.3, chr-F: 0.455 testset: URL, BLEU: 1.1, chr-F: 0.127 testset: URL, BLEU: 16.0, chr-F: 0.315 testset: URL, BLEU: 46.7, chr-F: 0.587 testset: URL, BLEU: 20.2, chr-F: 0.358 testset: URL, BLEU: 43.9, chr-F: 0.592 testset: URL, BLEU: 45.1, chr-F: 0.623 testset: URL, BLEU: 3.3, chr-F: 0.119 testset: URL, BLEU: 20.1, chr-F: 0.364 testset: URL, BLEU: 0.1, chr-F: 0.041 testset: URL, BLEU: 2.1, chr-F: 0.137 testset: URL, BLEU: 
1.7, chr-F: 0.152 testset: URL, BLEU: 18.2, chr-F: 0.334 testset: URL, BLEU: 21.7, chr-F: 0.373 testset: URL, BLEU: 34.5, chr-F: 0.502 testset: URL, BLEU: 10.5, chr-F: 0.295 testset: URL, BLEU: 2.8, chr-F: 0.160 testset: URL, BLEU: 46.7, chr-F: 0.623 testset: URL, BLEU: 33.0, chr-F: 0.492 testset: URL, BLEU: 17.0, chr-F: 0.391 testset: URL, BLEU: 16.0, chr-F: 0.339 testset: URL, BLEU: 36.4, chr-F: 0.533 testset: URL, BLEU: 0.4, chr-F: 0.131 testset: URL, BLEU: 0.7, chr-F: 0.132 testset: URL, BLEU: 41.9, chr-F: 0.551 testset: URL, BLEU: 33.2, chr-F: 0.510 testset: URL, BLEU: 32.2, chr-F: 0.487 testset: URL, BLEU: 9.4, chr-F: 0.278 testset: URL, BLEU: 5.8, chr-F: 0.200 testset: URL, BLEU: 31.7, chr-F: 0.503 testset: URL, BLEU: 9.1, chr-F: 0.164 testset: URL, BLEU: 42.2, chr-F: 0.595 testset: URL, BLEU: 29.7, chr-F: 0.485 testset: URL, BLEU: 42.1, chr-F: 0.607 testset: URL, BLEU: 35.7, chr-F: 0.527 testset: URL, BLEU: 54.8, chr-F: 0.686 testset: URL, BLEU: 28.3, chr-F: 0.526 testset: URL, BLEU: 10.0, chr-F: 0.282 testset: URL, BLEU: 0.3, chr-F: 0.115 testset: URL, BLEU: 5.3, chr-F: 0.140 testset: URL, BLEU: 18.8, chr-F: 0.387 testset: URL, BLEU: 3.9, chr-F: 0.205 testset: URL, BLEU: 16.9, chr-F: 0.329 testset: URL, BLEU: 16.2, chr-F: 0.374 testset: URL, BLEU: 31.1, chr-F: 0.493 testset: URL, BLEU: 24.5, chr-F: 0.437 testset: URL, BLEU: 7.4, chr-F: 0.192 testset: URL, BLEU: 1.0, chr-F: 0.154 testset: URL, BLEU: 12.2, chr-F: 0.290 testset: URL, BLEU: 22.5, chr-F: 0.355 testset: URL, BLEU: 27.2, chr-F: 0.470 testset: URL, BLEU: 2.1, chr-F: 0.129 testset: URL, BLEU: 4.5, chr-F: 0.259 testset: URL, BLEU: 1.4, chr-F: 0.099 testset: URL, BLEU: 26.1, chr-F: 0.387 testset: URL, BLEU: 5.5, chr-F: 0.256 testset: URL, BLEU: 9.3, chr-F: 0.288 testset: URL, BLEU: 9.6, chr-F: 0.208 testset: URL, BLEU: 30.1, chr-F: 0.475 testset: URL, BLEU: 11.6, chr-F: 0.284 testset: URL, BLEU: 4.5, chr-F: 0.214 testset: URL, BLEU: 21.5, chr-F: 0.402 testset: URL, BLEU: 40.2, chr-F: 0.577 testset: 
URL, BLEU: 0.8, chr-F: 0.115 testset: URL, BLEU: 23.0, chr-F: 0.433 testset: URL, BLEU: 9.3, chr-F: 0.287 testset: URL, BLEU: 2.4, chr-F: 0.196 testset: URL, BLEU: 44.0, chr-F: 0.597 testset: URL, BLEU: 1.6, chr-F: 0.115 testset: URL, BLEU: 2.0, chr-F: 0.113 testset: URL, BLEU: 18.3, chr-F: 0.312 testset: URL, BLEU: 25.4, chr-F: 0.395 testset: URL, BLEU: 35.9, chr-F: 0.509 testset: URL, BLEU: 5.1, chr-F: 0.357 testset: URL, BLEU: 2.8, chr-F: 0.123 testset: URL, BLEU: 5.7, chr-F: 0.175 testset: URL, BLEU: 56.3, chr-F: 0.703 testset: URL, BLEU: 37.5, chr-F: 0.534 testset: URL, BLEU: 22.8, chr-F: 0.470 testset: URL, BLEU: 2.0, chr-F: 0.110 testset: URL, BLEU: 59.2, chr-F: 0.764 testset: URL, BLEU: 9.0, chr-F: 0.199 testset: URL, BLEU: 44.3, chr-F: 0.593 testset: URL, BLEU: 31.9, chr-F: 0.424 testset: URL, BLEU: 38.6, chr-F: 0.540 testset: URL, BLEU: 2.5, chr-F: 0.101 testset: URL, BLEU: 0.3, chr-F: 0.110 testset: URL, BLEU: 13.5, chr-F: 0.334 testset: URL, BLEU: 8.5, chr-F: 0.260 testset: URL, BLEU: 33.9, chr-F: 0.520 testset: URL, BLEU: 34.7, chr-F: 0.518 testset: URL, BLEU: 37.4, chr-F: 0.630 testset: URL, BLEU: 15.5, chr-F: 0.335 testset: URL, BLEU: 0.8, chr-F: 0.118 testset: URL, BLEU: 9.0, chr-F: 0.186 testset: URL, BLEU: 1.3, chr-F: 0.144 testset: URL, BLEU: 30.7, chr-F: 0.495 testset: URL, BLEU: 3.5, chr-F: 0.168 testset: URL, BLEU: 42.7, chr-F: 0.492 testset: URL, BLEU: 47.9, chr-F: 0.640 testset: URL, BLEU: 12.7, chr-F: 0.284 testset: URL, BLEU: 43.8, chr-F: 0.586 testset: URL, BLEU: 45.5, chr-F: 0.619 testset: URL, BLEU: 26.9, chr-F: 0.472 testset: URL, BLEU: 33.2, chr-F: 0.456 testset: URL, BLEU: 17.9, chr-F: 0.370 testset: URL, BLEU: 14.6, chr-F: 0.305 testset: URL, BLEU: 11.0, chr-F: 0.283 testset: URL, BLEU: 4.1, chr-F: 0.211 testset: URL, BLEU: 4.1, chr-F: 0.216 testset: URL, BLEU: 24.3, chr-F: 0.468 testset: URL, BLEU: 16.4, chr-F: 0.358 testset: URL, BLEU: 53.2, chr-F: 0.628 testset: URL, BLEU: 3.7, chr-F: 0.173 testset: URL, BLEU: 45.3, chr-F: 0.569 
testset: URL, BLEU: 14.0, chr-F: 0.345 testset: URL, BLEU: 41.7, chr-F: 0.588 testset: URL, BLEU: 51.4, chr-F: 0.669 testset: URL, BLEU: 0.4, chr-F: 0.134 testset: URL, BLEU: 4.1, chr-F: 0.198 testset: URL, BLEU: 6.7, chr-F: 0.233 testset: URL, BLEU: 3.5, chr-F: 0.091 testset: URL, BLEU: 0.2, chr-F: 0.090 testset: URL, BLEU: 17.5, chr-F: 0.230 testset: URL, BLEU: 4.2, chr-F: 0.164 testset: URL, BLEU: 24.6, chr-F: 0.464 testset: URL, BLEU: 3.4, chr-F: 0.212 testset: URL, BLEU: 45.2, chr-F: 0.620 testset: URL, BLEU: 21.4, chr-F: 0.390 testset: URL, BLEU: 24.5, chr-F: 0.392 testset: URL, BLEU: 42.7, chr-F: 0.591 testset: URL, BLEU: 3.4, chr-F: 0.187 testset: URL, BLEU: 5.0, chr-F: 0.177 testset: URL, BLEU: 2.0, chr-F: 0.172 testset: URL, BLEU: 35.8, chr-F: 0.410 testset: URL, BLEU: 34.6, chr-F: 0.520 testset: URL, BLEU: 21.8, chr-F: 0.299 testset: URL, BLEU: 1.8, chr-F: 0.122 testset: URL, BLEU: 1.4, chr-F: 0.104 testset: URL, BLEU: 20.6, chr-F: 0.429 testset: URL, BLEU: 1.2, chr-F: 0.095 testset: URL, BLEU: 37.0, chr-F: 0.545 testset: URL, BLEU: 4.4, chr-F: 0.147 testset: URL, BLEU: 8.9, chr-F: 0.229 testset: URL, BLEU: 37.7, chr-F: 0.483 testset: URL, BLEU: 18.0, chr-F: 0.359 testset: URL, BLEU: 28.1, chr-F: 0.444 testset: URL, BLEU: 23.6, chr-F: 0.472 testset: URL, BLEU: 47.9, chr-F: 0.645 testset: URL, BLEU: 46.9, chr-F: 0.634 testset: URL, BLEU: 8.1, chr-F: 0.379 testset: URL, BLEU: 23.8, chr-F: 0.369 testset: URL, BLEU: 6.5, chr-F: 0.193 testset: URL, BLEU: 51.4, chr-F: 0.655 testset: URL, BLEU: 18.5, chr-F: 0.342 testset: URL, BLEU: 25.6, chr-F: 0.249 testset: URL, BLEU: 29.1, chr-F: 0.437 testset: URL, BLEU: 12.9, chr-F: 0.327 testset: URL, BLEU: 21.2, chr-F: 0.386 testset: URL, BLEU: 9.2, chr-F: 0.215 testset: URL, BLEU: 12.7, chr-F: 0.374 testset: URL, BLEU: 36.3, chr-F: 0.531 testset: URL, BLEU: 9.1, chr-F: 0.267 testset: URL, BLEU: 0.2, chr-F: 0.084 testset: URL, BLEU: 2.1, chr-F: 0.128 testset: URL, BLEU: 5.3, chr-F: 0.150 testset: URL, BLEU: 39.5, chr-F: 
0.473 testset: URL, BLEU: 1.5, chr-F: 0.160 testset: URL, BLEU: 44.7, chr-F: 0.526 testset: URL, BLEU: 18.6, chr-F: 0.401 testset: URL, BLEU: 40.5, chr-F: 0.573 testset: URL, BLEU: 55.0, chr-F: 0.593 testset: URL, BLEU: 19.1, chr-F: 0.477 testset: URL, BLEU: 17.7, chr-F: 0.333 testset: URL, BLEU: 3.4, chr-F: 0.217 testset: URL, BLEU: 11.4, chr-F: 0.289 testset: URL, BLEU: 43.1, chr-F: 0.595 testset: URL, BLEU: 9.2, chr-F: 0.260 testset: URL, BLEU: 23.2, chr-F: 0.426 testset: URL, BLEU: 19.0, chr-F: 0.342 testset: URL, BLEU: 41.1, chr-F: 0.409 testset: URL, BLEU: 30.6, chr-F: 0.481 testset: URL, BLEU: 1.8, chr-F: 0.143 testset: URL, BLEU: 15.9, chr-F: 0.352 testset: URL, BLEU: 12.6, chr-F: 0.291 testset: URL, BLEU: 4.4, chr-F: 0.138 testset: URL, BLEU: 0.9, chr-F: 0.153 testset: URL, BLEU: 35.4, chr-F: 0.513 testset: URL, BLEU: 19.4, chr-F: 0.387 testset: URL, BLEU: 19.3, chr-F: 0.327 testset: URL, BLEU: 25.8, chr-F: 0.448 testset: URL, BLEU: 40.9, chr-F: 0.567 testset: URL, BLEU: 1.6, chr-F: 0.125 ### System Info: * hf\_name: mul-eng * source\_languages: mul * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en'] * src\_constituents: {'sjn\_Latn', 'cat', 'nan', 'spa', 'ile\_Latn', 'pap', 
'mwl', 'uzb\_Latn', 'mww', 'hil', 'lij', 'avk\_Latn', 'lad\_Latn', 'lat\_Latn', 'bos\_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi\_Latn', 'awa', 'swg', 'zsm\_Latn', 'zho\_Hant', 'gcf\_Latn', 'uzb\_Cyrl', 'isl', 'lfn\_Latn', 'shs\_Latn', 'nov\_Latn', 'bho', 'ltz', 'lzh', 'kur\_Latn', 'sun', 'arg', 'pes\_Thaa', 'sqi', 'uig\_Arab', 'csb\_Latn', 'fra', 'hat', 'liv\_Latn', 'non\_Latn', 'sco', 'cmn\_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul\_Latn', 'amh', 'lfn\_Cyrl', 'eus', 'fkv\_Latn', 'tur', 'pus', 'afr', 'brx\_Latn', 'nya', 'acm', 'ota\_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho\_Hans', 'tmw\_Latn', 'kjh', 'ota\_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh\_Hans', 'ara', 'tly\_Latn', 'lug', 'brx', 'bul', 'bel', 'vol\_Latn', 'kat', 'gan', 'got\_Goth', 'vro', 'ext', 'afh\_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif\_Latn', 'cjy\_Hant', 'bre', 'ceb', 'mah', 'nob\_Hebr', 'crh\_Latn', 'prg\_Latn', 'khm', 'ang\_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze\_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie\_Hani', 'arz', 'yue', 'kha', 'san\_Deva', 'jbo\_Latn', 'gos', 'hau\_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig\_Cyrl', 'fao', 'mnw', 'zho', 'orv\_Cyrl', 'iba', 'bel\_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc\_Grek', 'tpw\_Latn', 'oci', 'mfe', 'sna', 'kir\_Cyrl', 'tat\_Latn', 'gom', 'ido\_Latn', 'sgs', 'pau', 'tgk\_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp\_Latn', 'wuu', 'dtp', 'jbo\_Cyrl', 'tet', 'bod', 'yue\_Hans', 'zlm\_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz\_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif\_Latn', 'vie', 'enm\_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina\_Latn', 'cjy\_Hans', 'jdt\_Cyrl', 'gsw', 'glv', 'khm\_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd\_Arab', 'arq', 'mri', 'kur\_Arab', 'por', 'hin', 'shy\_Latn', 'sme', 'rap', 'tyv', 
'dsb', 'moh', 'asm', 'lad', 'yue\_Hant', 'kpv', 'tam', 'est', 'frm\_Latn', 'hoc\_Latn', 'bam\_Latn', 'kek\_Latn', 'ksh', 'tlh\_Latn', 'ltg', 'pan\_Guru', 'hnj\_Latn', 'cor', 'gle', 'swe', 'lin', 'qya\_Latn', 'kum', 'mad', 'cmn\_Hant', 'fuv', 'nau', 'mon', 'akl\_Latn', 'guj', 'kaz\_Latn', 'wln', 'tuk\_Latn', 'jav\_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws\_Latn', 'urd', 'stq', 'tat\_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl\_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld\_Latn', 'tzl\_Latn', 'mdf', 'ike\_Latn', 'ces', 'ldn\_Latn', 'egl', 'heb', 'vec', 'zul', 'max\_Latn', 'pes\_Latn', 'yid', 'mal', 'nds'} * tgt\_constituents: {'eng'} * src\_multilingual: True * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: mul * tgt\_alpha3: eng * short\_pair: mul-en * chrF2\_score: 0.518 * bleu: 34.7 * brevity\_penalty: 1.0 * ref\_len: 72346.0 * src\_name: Multiple languages * tgt\_name: English * train\_date: 2020-08-01 * src\_alpha2: mul * tgt\_alpha2: en * prefer\_old: False * long\_pair: mul-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
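The System Info block above reports `bleu: 34.7` together with `brevity_penalty: 1.0` and `ref_len: 72346.0`. The brevity penalty follows the standard BLEU definition (it is 1.0 whenever the system output is at least as long as the reference, which is what the card's value implies here); a minimal sketch:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: 1.0 when the hypothesis is at least
    as long as the reference, exp(1 - ref_len/hyp_len) otherwise."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# With the card's reference length of 72346 tokens, a penalty of 1.0
# implies the system output was not shorter than the reference.
print(brevity_penalty(72346, 72346))  # 1.0
# A hypothesis half the reference length would be penalized:
print(round(brevity_penalty(36173, 72346), 4))
```

The function names and lengths other than `ref_len` are illustrative; the card itself only reports the final scores.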
[ "### mul-eng\n\n\n* source group: Multiple languages\n* target group: English\n* OPUS readme: mul-eng\n* model: transformer\n* source language(s): abk acm ady afb afh\\_Latn afr akl\\_Latn aln amh ang\\_Latn apc ara arg arq ary arz asm ast avk\\_Latn awa aze\\_Latn bak bam\\_Latn bel bel\\_Latn ben bho bod bos\\_Latn bre brx brx\\_Latn bul bul\\_Latn cat ceb ces cha che chr chv cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant cor cos crh crh\\_Latn csb\\_Latn cym dan deu dsb dtp dws\\_Latn egl ell enm\\_Latn epo est eus ewe ext fao fij fin fkv\\_Latn fra frm\\_Latn frr fry fuc fuv gan gcf\\_Latn gil gla gle glg glv gom gos got\\_Goth grc\\_Grek grn gsw guj hat hau\\_Latn haw heb hif\\_Latn hil hin hnj\\_Latn hoc hoc\\_Latn hrv hsb hun hye iba ibo ido ido\\_Latn ike\\_Latn ile\\_Latn ilo ina\\_Latn ind isl ita izh jav jav\\_Java jbo jbo\\_Cyrl jbo\\_Latn jdt\\_Cyrl jpn kab kal kan kat kaz\\_Cyrl kaz\\_Latn kek\\_Latn kha khm khm\\_Latn kin kir\\_Cyrl kjh kpv krl ksh kum kur\\_Arab kur\\_Latn lad lad\\_Latn lao lat\\_Latn lav ldn\\_Latn lfn\\_Cyrl lfn\\_Latn lij lin lit liv\\_Latn lkt lld\\_Latn lmo ltg ltz lug lzh lzh\\_Hans mad mah mai mal mar max\\_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob\\_Hebr nog non\\_Latn nov\\_Latn npi nya oci ori orv\\_Cyrl oss ota\\_Arab ota\\_Latn pag pan\\_Guru pap pau pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por ppl\\_Latn prg\\_Latn pus quc qya qya\\_Latn rap rif\\_Latn roh rom ron rue run rus sag sah san\\_Deva scn sco sgs shs\\_Latn shy\\_Latn sin sjn\\_Latn slv sma sme smo sna snd\\_Arab som spa sqi srp\\_Cyrl srp\\_Latn stq sun swe swg swh tah tam tat tat\\_Arab tat\\_Latn tel tet tgk\\_Cyrl tha tir tlh\\_Latn tly\\_Latn tmw\\_Latn toi\\_Latn ton tpw\\_Latn tso tuk tuk\\_Latn tur tvl tyv tzl tzl\\_Latn udm uig\\_Arab uig\\_Cyrl ukr umb urd uzb\\_Cyrl uzb\\_Latn vec vie vie\\_Hani vol\\_Latn vro war wln wol wuu xal xho yid yor yue yue\\_Hans yue\\_Hant zho zho\\_Hans 
zho\\_Hant zlm\\_Latn zsm\\_Latn zul zza\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.5, chr-F: 0.341\ntestset: URL, BLEU: 16.8, chr-F: 0.441\ntestset: URL, BLEU: 31.3, chr-F: 0.580\ntestset: URL, BLEU: 16.4, chr-F: 0.422\ntestset: URL, BLEU: 21.3, chr-F: 0.502\ntestset: URL, BLEU: 12.7, chr-F: 0.409\ntestset: URL, BLEU: 19.8, chr-F: 0.467\ntestset: URL, BLEU: 13.3, chr-F: 0.385\ntestset: URL, BLEU: 19.9, chr-F: 0.482\ntestset: URL, BLEU: 26.7, chr-F: 0.520\ntestset: URL, BLEU: 29.8, chr-F: 0.541\ntestset: URL, BLEU: 21.1, chr-F: 0.487\ntestset: URL, BLEU: 22.6, chr-F: 0.499\ntestset: URL, BLEU: 25.8, chr-F: 0.530\ntestset: URL, BLEU: 15.1, chr-F: 0.430\ntestset: URL, BLEU: 29.4, chr-F: 0.555\ntestset: URL, BLEU: 26.1, chr-F: 0.534\ntestset: URL, BLEU: 21.6, chr-F: 0.491\ntestset: URL, BLEU: 22.3, chr-F: 0.502\ntestset: URL, BLEU: 23.6, chr-F: 0.514\ntestset: URL, BLEU: 19.8, chr-F: 0.480\ntestset: URL, BLEU: 20.9, chr-F: 0.487\ntestset: URL, BLEU: 25.0, chr-F: 0.523\ntestset: URL, BLEU: 14.7, chr-F: 0.425\ntestset: URL, BLEU: 27.6, chr-F: 0.542\ntestset: URL, BLEU: 25.7, chr-F: 0.530\ntestset: URL, BLEU: 20.6, chr-F: 0.491\ntestset: URL, BLEU: 23.4, chr-F: 0.517\ntestset: URL, BLEU: 26.1, chr-F: 0.537\ntestset: URL, BLEU: 29.1, chr-F: 0.561\ntestset: URL, BLEU: 21.0, chr-F: 0.489\ntestset: URL, BLEU: 21.3, chr-F: 0.494\ntestset: URL, BLEU: 26.8, chr-F: 0.546\ntestset: URL, BLEU: 28.2, chr-F: 0.549\ntestset: URL, BLEU: 20.5, chr-F: 0.485\ntestset: URL, BLEU: 22.3, chr-F: 0.503\ntestset: URL, BLEU: 27.5, chr-F: 0.545\ntestset: URL, BLEU: 26.6, chr-F: 0.532\ntestset: URL, BLEU: 30.3, chr-F: 0.567\ntestset: URL, BLEU: 22.5, chr-F: 0.498\ntestset: URL, BLEU: 25.0, chr-F: 0.518\ntestset: URL, BLEU: 27.4, chr-F: 0.537\ntestset: URL, BLEU: 21.6, chr-F: 
0.484\ntestset: URL, BLEU: 28.4, chr-F: 0.555\ntestset: URL, BLEU: 24.0, chr-F: 0.517\ntestset: URL, BLEU: 24.1, chr-F: 0.511\ntestset: URL, BLEU: 29.1, chr-F: 0.563\ntestset: URL, BLEU: 14.0, chr-F: 0.414\ntestset: URL, BLEU: 24.0, chr-F: 0.521\ntestset: URL, BLEU: 21.9, chr-F: 0.481\ntestset: URL, BLEU: 25.5, chr-F: 0.519\ntestset: URL, BLEU: 17.4, chr-F: 0.441\ntestset: URL, BLEU: 22.4, chr-F: 0.494\ntestset: URL, BLEU: 23.0, chr-F: 0.500\ntestset: URL, BLEU: 30.1, chr-F: 0.560\ntestset: URL, BLEU: 18.5, chr-F: 0.461\ntestset: URL, BLEU: 29.6, chr-F: 0.562\ntestset: URL, BLEU: 22.0, chr-F: 0.495\ntestset: URL, BLEU: 14.8, chr-F: 0.415\ntestset: URL, BLEU: 20.2, chr-F: 0.475\ntestset: URL, BLEU: 26.0, chr-F: 0.523\ntestset: URL, BLEU: 19.6, chr-F: 0.465\ntestset: URL, BLEU: 16.2, chr-F: 0.454\ntestset: URL, BLEU: 24.2, chr-F: 0.510\ntestset: URL, BLEU: 15.0, chr-F: 0.412\ntestset: URL, BLEU: 13.7, chr-F: 0.412\ntestset: URL, BLEU: 21.2, chr-F: 0.486\ntestset: URL, BLEU: 31.5, chr-F: 0.564\ntestset: URL, BLEU: 19.7, chr-F: 0.473\ntestset: URL, BLEU: 15.1, chr-F: 0.418\ntestset: URL, BLEU: 21.3, chr-F: 0.490\ntestset: URL, BLEU: 15.4, chr-F: 0.421\ntestset: URL, BLEU: 12.9, chr-F: 0.408\ntestset: URL, BLEU: 27.0, chr-F: 0.529\ntestset: URL, BLEU: 17.2, chr-F: 0.438\ntestset: URL, BLEU: 9.0, chr-F: 0.342\ntestset: URL, BLEU: 22.6, chr-F: 0.512\ntestset: URL, BLEU: 24.1, chr-F: 0.503\ntestset: URL, BLEU: 13.9, chr-F: 0.427\ntestset: URL, BLEU: 15.2, chr-F: 0.428\ntestset: URL, BLEU: 16.8, chr-F: 0.442\ntestset: URL, BLEU: 16.8, chr-F: 0.442\ntestset: URL, BLEU: 2.4, chr-F: 0.190\ntestset: URL, BLEU: 1.1, chr-F: 0.111\ntestset: URL, BLEU: 1.7, chr-F: 0.108\ntestset: URL, BLEU: 53.0, chr-F: 0.672\ntestset: URL, BLEU: 5.9, chr-F: 0.239\ntestset: URL, BLEU: 25.6, chr-F: 0.464\ntestset: URL, BLEU: 11.7, chr-F: 0.289\ntestset: URL, BLEU: 26.4, chr-F: 0.443\ntestset: URL, BLEU: 35.9, chr-F: 0.473\ntestset: URL, BLEU: 19.8, chr-F: 0.365\ntestset: URL, BLEU: 31.8, chr-F: 
0.467\ntestset: URL, BLEU: 0.4, chr-F: 0.119\ntestset: URL, BLEU: 9.7, chr-F: 0.271\ntestset: URL, BLEU: 37.0, chr-F: 0.542\ntestset: URL, BLEU: 13.9, chr-F: 0.395\ntestset: URL, BLEU: 2.2, chr-F: 0.094\ntestset: URL, BLEU: 36.8, chr-F: 0.549\ntestset: URL, BLEU: 39.7, chr-F: 0.546\ntestset: URL, BLEU: 33.6, chr-F: 0.540\ntestset: URL, BLEU: 1.1, chr-F: 0.147\ntestset: URL, BLEU: 14.2, chr-F: 0.303\ntestset: URL, BLEU: 1.7, chr-F: 0.130\ntestset: URL, BLEU: 46.0, chr-F: 0.621\ntestset: URL, BLEU: 46.6, chr-F: 0.636\ntestset: URL, BLEU: 17.4, chr-F: 0.347\ntestset: URL, BLEU: 41.3, chr-F: 0.586\ntestset: URL, BLEU: 7.9, chr-F: 0.232\ntestset: URL, BLEU: 0.7, chr-F: 0.104\ntestset: URL, BLEU: 7.3, chr-F: 0.261\ntestset: URL, BLEU: 8.8, chr-F: 0.244\ntestset: URL, BLEU: 11.0, chr-F: 0.319\ntestset: URL, BLEU: 5.4, chr-F: 0.204\ntestset: URL, BLEU: 58.2, chr-F: 0.643\ntestset: URL, BLEU: 26.3, chr-F: 0.399\ntestset: URL, BLEU: 18.8, chr-F: 0.389\ntestset: URL, BLEU: 23.4, chr-F: 0.407\ntestset: URL, BLEU: 50.5, chr-F: 0.659\ntestset: URL, BLEU: 39.6, chr-F: 0.579\ntestset: URL, BLEU: 24.3, chr-F: 0.449\ntestset: URL, BLEU: 1.0, chr-F: 0.149\ntestset: URL, BLEU: 1.6, chr-F: 0.061\ntestset: URL, BLEU: 7.6, chr-F: 0.236\ntestset: URL, BLEU: 55.4, chr-F: 0.682\ntestset: URL, BLEU: 28.0, chr-F: 0.489\ntestset: URL, BLEU: 41.8, chr-F: 0.591\ntestset: URL, BLEU: 41.5, chr-F: 0.581\ntestset: URL, BLEU: 37.8, chr-F: 0.557\ntestset: URL, BLEU: 10.7, chr-F: 0.262\ntestset: URL, BLEU: 25.5, chr-F: 0.405\ntestset: URL, BLEU: 28.7, chr-F: 0.469\ntestset: URL, BLEU: 7.5, chr-F: 0.281\ntestset: URL, BLEU: 24.2, chr-F: 0.320\ntestset: URL, BLEU: 35.8, chr-F: 0.534\ntestset: URL, BLEU: 15.5, chr-F: 0.434\ntestset: URL, BLEU: 45.1, chr-F: 0.618\ntestset: URL, BLEU: 29.6, chr-F: 0.427\ntestset: URL, BLEU: 5.5, chr-F: 0.138\ntestset: URL, BLEU: 25.3, chr-F: 0.455\ntestset: URL, BLEU: 1.1, chr-F: 0.127\ntestset: URL, BLEU: 16.0, chr-F: 0.315\ntestset: URL, BLEU: 46.7, chr-F: 0.587\ntestset: 
URL, BLEU: 20.2, chr-F: 0.358\ntestset: URL, BLEU: 43.9, chr-F: 0.592\ntestset: URL, BLEU: 45.1, chr-F: 0.623\ntestset: URL, BLEU: 3.3, chr-F: 0.119\ntestset: URL, BLEU: 20.1, chr-F: 0.364\ntestset: URL, BLEU: 0.1, chr-F: 0.041\ntestset: URL, BLEU: 2.1, chr-F: 0.137\ntestset: URL, BLEU: 1.7, chr-F: 0.152\ntestset: URL, BLEU: 18.2, chr-F: 0.334\ntestset: URL, BLEU: 21.7, chr-F: 0.373\ntestset: URL, BLEU: 34.5, chr-F: 0.502\ntestset: URL, BLEU: 10.5, chr-F: 0.295\ntestset: URL, BLEU: 2.8, chr-F: 0.160\ntestset: URL, BLEU: 46.7, chr-F: 0.623\ntestset: URL, BLEU: 33.0, chr-F: 0.492\ntestset: URL, BLEU: 17.0, chr-F: 0.391\ntestset: URL, BLEU: 16.0, chr-F: 0.339\ntestset: URL, BLEU: 36.4, chr-F: 0.533\ntestset: URL, BLEU: 0.4, chr-F: 0.131\ntestset: URL, BLEU: 0.7, chr-F: 0.132\ntestset: URL, BLEU: 41.9, chr-F: 0.551\ntestset: URL, BLEU: 33.2, chr-F: 0.510\ntestset: URL, BLEU: 32.2, chr-F: 0.487\ntestset: URL, BLEU: 9.4, chr-F: 0.278\ntestset: URL, BLEU: 5.8, chr-F: 0.200\ntestset: URL, BLEU: 31.7, chr-F: 0.503\ntestset: URL, BLEU: 9.1, chr-F: 0.164\ntestset: URL, BLEU: 42.2, chr-F: 0.595\ntestset: URL, BLEU: 29.7, chr-F: 0.485\ntestset: URL, BLEU: 42.1, chr-F: 0.607\ntestset: URL, BLEU: 35.7, chr-F: 0.527\ntestset: URL, BLEU: 54.8, chr-F: 0.686\ntestset: URL, BLEU: 28.3, chr-F: 0.526\ntestset: URL, BLEU: 10.0, chr-F: 0.282\ntestset: URL, BLEU: 0.3, chr-F: 0.115\ntestset: URL, BLEU: 5.3, chr-F: 0.140\ntestset: URL, BLEU: 18.8, chr-F: 0.387\ntestset: URL, BLEU: 3.9, chr-F: 0.205\ntestset: URL, BLEU: 16.9, chr-F: 0.329\ntestset: URL, BLEU: 16.2, chr-F: 0.374\ntestset: URL, BLEU: 31.1, chr-F: 0.493\ntestset: URL, BLEU: 24.5, chr-F: 0.437\ntestset: URL, BLEU: 7.4, chr-F: 0.192\ntestset: URL, BLEU: 1.0, chr-F: 0.154\ntestset: URL, BLEU: 12.2, chr-F: 0.290\ntestset: URL, BLEU: 22.5, chr-F: 0.355\ntestset: URL, BLEU: 27.2, chr-F: 0.470\ntestset: URL, BLEU: 2.1, chr-F: 0.129\ntestset: URL, BLEU: 4.5, chr-F: 0.259\ntestset: URL, BLEU: 1.4, chr-F: 0.099\ntestset: URL, BLEU: 26.1, 
chr-F: 0.387\ntestset: URL, BLEU: 5.5, chr-F: 0.256\ntestset: URL, BLEU: 9.3, chr-F: 0.288\ntestset: URL, BLEU: 9.6, chr-F: 0.208\ntestset: URL, BLEU: 30.1, chr-F: 0.475\ntestset: URL, BLEU: 11.6, chr-F: 0.284\ntestset: URL, BLEU: 4.5, chr-F: 0.214\ntestset: URL, BLEU: 21.5, chr-F: 0.402\ntestset: URL, BLEU: 40.2, chr-F: 0.577\ntestset: URL, BLEU: 0.8, chr-F: 0.115\ntestset: URL, BLEU: 23.0, chr-F: 0.433\ntestset: URL, BLEU: 9.3, chr-F: 0.287\ntestset: URL, BLEU: 2.4, chr-F: 0.196\ntestset: URL, BLEU: 44.0, chr-F: 0.597\ntestset: URL, BLEU: 1.6, chr-F: 0.115\ntestset: URL, BLEU: 2.0, chr-F: 0.113\ntestset: URL, BLEU: 18.3, chr-F: 0.312\ntestset: URL, BLEU: 25.4, chr-F: 0.395\ntestset: URL, BLEU: 35.9, chr-F: 0.509\ntestset: URL, BLEU: 5.1, chr-F: 0.357\ntestset: URL, BLEU: 2.8, chr-F: 0.123\ntestset: URL, BLEU: 5.7, chr-F: 0.175\ntestset: URL, BLEU: 56.3, chr-F: 0.703\ntestset: URL, BLEU: 37.5, chr-F: 0.534\ntestset: URL, BLEU: 22.8, chr-F: 0.470\ntestset: URL, BLEU: 2.0, chr-F: 0.110\ntestset: URL, BLEU: 59.2, chr-F: 0.764\ntestset: URL, BLEU: 9.0, chr-F: 0.199\ntestset: URL, BLEU: 44.3, chr-F: 0.593\ntestset: URL, BLEU: 31.9, chr-F: 0.424\ntestset: URL, BLEU: 38.6, chr-F: 0.540\ntestset: URL, BLEU: 2.5, chr-F: 0.101\ntestset: URL, BLEU: 0.3, chr-F: 0.110\ntestset: URL, BLEU: 13.5, chr-F: 0.334\ntestset: URL, BLEU: 8.5, chr-F: 0.260\ntestset: URL, BLEU: 33.9, chr-F: 0.520\ntestset: URL, BLEU: 34.7, chr-F: 0.518\ntestset: URL, BLEU: 37.4, chr-F: 0.630\ntestset: URL, BLEU: 15.5, chr-F: 0.335\ntestset: URL, BLEU: 0.8, chr-F: 0.118\ntestset: URL, BLEU: 9.0, chr-F: 0.186\ntestset: URL, BLEU: 1.3, chr-F: 0.144\ntestset: URL, BLEU: 30.7, chr-F: 0.495\ntestset: URL, BLEU: 3.5, chr-F: 0.168\ntestset: URL, BLEU: 42.7, chr-F: 0.492\ntestset: URL, BLEU: 47.9, chr-F: 0.640\ntestset: URL, BLEU: 12.7, chr-F: 0.284\ntestset: URL, BLEU: 43.8, chr-F: 0.586\ntestset: URL, BLEU: 45.5, chr-F: 0.619\ntestset: URL, BLEU: 26.9, chr-F: 0.472\ntestset: URL, BLEU: 33.2, chr-F: 
0.456\ntestset: URL, BLEU: 17.9, chr-F: 0.370\ntestset: URL, BLEU: 14.6, chr-F: 0.305\ntestset: URL, BLEU: 11.0, chr-F: 0.283\ntestset: URL, BLEU: 4.1, chr-F: 0.211\ntestset: URL, BLEU: 4.1, chr-F: 0.216\ntestset: URL, BLEU: 24.3, chr-F: 0.468\ntestset: URL, BLEU: 16.4, chr-F: 0.358\ntestset: URL, BLEU: 53.2, chr-F: 0.628\ntestset: URL, BLEU: 3.7, chr-F: 0.173\ntestset: URL, BLEU: 45.3, chr-F: 0.569\ntestset: URL, BLEU: 14.0, chr-F: 0.345\ntestset: URL, BLEU: 41.7, chr-F: 0.588\ntestset: URL, BLEU: 51.4, chr-F: 0.669\ntestset: URL, BLEU: 0.4, chr-F: 0.134\ntestset: URL, BLEU: 4.1, chr-F: 0.198\ntestset: URL, BLEU: 6.7, chr-F: 0.233\ntestset: URL, BLEU: 3.5, chr-F: 0.091\ntestset: URL, BLEU: 0.2, chr-F: 0.090\ntestset: URL, BLEU: 17.5, chr-F: 0.230\ntestset: URL, BLEU: 4.2, chr-F: 0.164\ntestset: URL, BLEU: 24.6, chr-F: 0.464\ntestset: URL, BLEU: 3.4, chr-F: 0.212\ntestset: URL, BLEU: 45.2, chr-F: 0.620\ntestset: URL, BLEU: 21.4, chr-F: 0.390\ntestset: URL, BLEU: 24.5, chr-F: 0.392\ntestset: URL, BLEU: 42.7, chr-F: 0.591\ntestset: URL, BLEU: 3.4, chr-F: 0.187\ntestset: URL, BLEU: 5.0, chr-F: 0.177\ntestset: URL, BLEU: 2.0, chr-F: 0.172\ntestset: URL, BLEU: 35.8, chr-F: 0.410\ntestset: URL, BLEU: 34.6, chr-F: 0.520\ntestset: URL, BLEU: 21.8, chr-F: 0.299\ntestset: URL, BLEU: 1.8, chr-F: 0.122\ntestset: URL, BLEU: 1.4, chr-F: 0.104\ntestset: URL, BLEU: 20.6, chr-F: 0.429\ntestset: URL, BLEU: 1.2, chr-F: 0.095\ntestset: URL, BLEU: 37.0, chr-F: 0.545\ntestset: URL, BLEU: 4.4, chr-F: 0.147\ntestset: URL, BLEU: 8.9, chr-F: 0.229\ntestset: URL, BLEU: 37.7, chr-F: 0.483\ntestset: URL, BLEU: 18.0, chr-F: 0.359\ntestset: URL, BLEU: 28.1, chr-F: 0.444\ntestset: URL, BLEU: 23.6, chr-F: 0.472\ntestset: URL, BLEU: 47.9, chr-F: 0.645\ntestset: URL, BLEU: 46.9, chr-F: 0.634\ntestset: URL, BLEU: 8.1, chr-F: 0.379\ntestset: URL, BLEU: 23.8, chr-F: 0.369\ntestset: URL, BLEU: 6.5, chr-F: 0.193\ntestset: URL, BLEU: 51.4, chr-F: 0.655\ntestset: URL, BLEU: 18.5, chr-F: 0.342\ntestset: 
URL, BLEU: 25.6, chr-F: 0.249\ntestset: URL, BLEU: 29.1, chr-F: 0.437\ntestset: URL, BLEU: 12.9, chr-F: 0.327\ntestset: URL, BLEU: 21.2, chr-F: 0.386\ntestset: URL, BLEU: 9.2, chr-F: 0.215\ntestset: URL, BLEU: 12.7, chr-F: 0.374\ntestset: URL, BLEU: 36.3, chr-F: 0.531\ntestset: URL, BLEU: 9.1, chr-F: 0.267\ntestset: URL, BLEU: 0.2, chr-F: 0.084\ntestset: URL, BLEU: 2.1, chr-F: 0.128\ntestset: URL, BLEU: 5.3, chr-F: 0.150\ntestset: URL, BLEU: 39.5, chr-F: 0.473\ntestset: URL, BLEU: 1.5, chr-F: 0.160\ntestset: URL, BLEU: 44.7, chr-F: 0.526\ntestset: URL, BLEU: 18.6, chr-F: 0.401\ntestset: URL, BLEU: 40.5, chr-F: 0.573\ntestset: URL, BLEU: 55.0, chr-F: 0.593\ntestset: URL, BLEU: 19.1, chr-F: 0.477\ntestset: URL, BLEU: 17.7, chr-F: 0.333\ntestset: URL, BLEU: 3.4, chr-F: 0.217\ntestset: URL, BLEU: 11.4, chr-F: 0.289\ntestset: URL, BLEU: 43.1, chr-F: 0.595\ntestset: URL, BLEU: 9.2, chr-F: 0.260\ntestset: URL, BLEU: 23.2, chr-F: 0.426\ntestset: URL, BLEU: 19.0, chr-F: 0.342\ntestset: URL, BLEU: 41.1, chr-F: 0.409\ntestset: URL, BLEU: 30.6, chr-F: 0.481\ntestset: URL, BLEU: 1.8, chr-F: 0.143\ntestset: URL, BLEU: 15.9, chr-F: 0.352\ntestset: URL, BLEU: 12.6, chr-F: 0.291\ntestset: URL, BLEU: 4.4, chr-F: 0.138\ntestset: URL, BLEU: 0.9, chr-F: 0.153\ntestset: URL, BLEU: 35.4, chr-F: 0.513\ntestset: URL, BLEU: 19.4, chr-F: 0.387\ntestset: URL, BLEU: 19.3, chr-F: 0.327\ntestset: URL, BLEU: 25.8, chr-F: 0.448\ntestset: URL, BLEU: 40.9, chr-F: 0.567\ntestset: URL, BLEU: 1.6, chr-F: 0.125", "### System Info:\n\n\n* hf\\_name: mul-eng\n* source\\_languages: mul\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 
'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en']\n* src\\_constituents: {'sjn\\_Latn', 'cat', 'nan', 'spa', 'ile\\_Latn', 'pap', 'mwl', 'uzb\\_Latn', 'mww', 'hil', 'lij', 'avk\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'bos\\_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi\\_Latn', 'awa', 'swg', 'zsm\\_Latn', 'zho\\_Hant', 'gcf\\_Latn', 'uzb\\_Cyrl', 'isl', 'lfn\\_Latn', 'shs\\_Latn', 'nov\\_Latn', 'bho', 'ltz', 'lzh', 'kur\\_Latn', 'sun', 'arg', 'pes\\_Thaa', 'sqi', 'uig\\_Arab', 'csb\\_Latn', 'fra', 'hat', 'liv\\_Latn', 'non\\_Latn', 'sco', 'cmn\\_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul\\_Latn', 'amh', 'lfn\\_Cyrl', 'eus', 'fkv\\_Latn', 'tur', 'pus', 'afr', 'brx\\_Latn', 'nya', 'acm', 'ota\\_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho\\_Hans', 'tmw\\_Latn', 'kjh', 'ota\\_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh\\_Hans', 'ara', 'tly\\_Latn', 'lug', 'brx', 'bul', 'bel', 'vol\\_Latn', 'kat', 'gan', 'got\\_Goth', 'vro', 'ext', 'afh\\_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif\\_Latn', 'cjy\\_Hant', 'bre', 'ceb', 'mah', 'nob\\_Hebr', 'crh\\_Latn', 'prg\\_Latn', 'khm', 'ang\\_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze\\_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie\\_Hani', 'arz', 'yue', 'kha', 'san\\_Deva', 'jbo\\_Latn', 'gos', 'hau\\_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig\\_Cyrl', 'fao', 'mnw', 'zho', 'orv\\_Cyrl', 'iba', 'bel\\_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc\\_Grek', 'tpw\\_Latn', 'oci', 'mfe', 'sna', 'kir\\_Cyrl', 'tat\\_Latn', 'gom', 'ido\\_Latn', 'sgs', 'pau', 
'tgk\\_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp\\_Latn', 'wuu', 'dtp', 'jbo\\_Cyrl', 'tet', 'bod', 'yue\\_Hans', 'zlm\\_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz\\_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif\\_Latn', 'vie', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina\\_Latn', 'cjy\\_Hans', 'jdt\\_Cyrl', 'gsw', 'glv', 'khm\\_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd\\_Arab', 'arq', 'mri', 'kur\\_Arab', 'por', 'hin', 'shy\\_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue\\_Hant', 'kpv', 'tam', 'est', 'frm\\_Latn', 'hoc\\_Latn', 'bam\\_Latn', 'kek\\_Latn', 'ksh', 'tlh\\_Latn', 'ltg', 'pan\\_Guru', 'hnj\\_Latn', 'cor', 'gle', 'swe', 'lin', 'qya\\_Latn', 'kum', 'mad', 'cmn\\_Hant', 'fuv', 'nau', 'mon', 'akl\\_Latn', 'guj', 'kaz\\_Latn', 'wln', 'tuk\\_Latn', 'jav\\_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws\\_Latn', 'urd', 'stq', 'tat\\_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl\\_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld\\_Latn', 'tzl\\_Latn', 'mdf', 'ike\\_Latn', 'ces', 'ldn\\_Latn', 'egl', 'heb', 'vec', 'zul', 'max\\_Latn', 'pes\\_Latn', 'yid', 'mal', 'nds'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: mul\n* tgt\\_alpha3: eng\n* short\\_pair: mul-en\n* chrF2\\_score: 0.518\n* bleu: 34.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 72346.0\n* src\\_name: Multiple languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: mul\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: mul-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #os #eo #ro #fy #cy #is #lb #su #an #sq #fr #ht #rm #cv #ig #am #eu #tr #ps #af #ny #ch #uk #sl #lt #tk #sg #ar #lg #bg #be #ka #gd #ja #si #br #mh #km #th #ty #rw #te #mk #or #wo #kl #mr #ru #yo #hu #fo #zh #ti #co #ee #oc #sn #mt #ts #pl #gl #nb #bn #tt #bo #lo #id #gn #nv #hy #kn #to #io #so #vi #da #fj #gv #sm #nl #mi #pt #hi #se #as #ta #et #kw #ga #sv #ln #na #mn #gu #wa #lv #jv #el #my #ba #it #hr #ur #ce #nn #fi #mg #rn #xh #ab #de #cs #he #zu #yi #ml #mul #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### mul-eng\n\n\n* source group: Multiple languages\n* target group: English\n* OPUS readme: mul-eng\n* model: transformer\n* source language(s): abk acm ady afb afh\\_Latn afr akl\\_Latn aln amh ang\\_Latn apc ara arg arq ary arz asm ast avk\\_Latn awa aze\\_Latn bak bam\\_Latn bel bel\\_Latn ben bho bod bos\\_Latn bre brx brx\\_Latn bul bul\\_Latn cat ceb ces cha che chr chv cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant cor cos crh crh\\_Latn csb\\_Latn cym dan deu dsb dtp dws\\_Latn egl ell enm\\_Latn epo est eus ewe ext fao fij fin fkv\\_Latn fra frm\\_Latn frr fry fuc fuv gan gcf\\_Latn gil gla gle glg glv gom gos got\\_Goth grc\\_Grek grn gsw guj hat hau\\_Latn haw heb hif\\_Latn hil hin hnj\\_Latn hoc hoc\\_Latn hrv hsb hun hye iba ibo ido ido\\_Latn ike\\_Latn ile\\_Latn ilo ina\\_Latn ind isl ita izh jav jav\\_Java jbo jbo\\_Cyrl jbo\\_Latn jdt\\_Cyrl jpn kab kal kan kat kaz\\_Cyrl kaz\\_Latn kek\\_Latn kha khm khm\\_Latn kin kir\\_Cyrl kjh kpv krl ksh kum kur\\_Arab kur\\_Latn lad lad\\_Latn lao lat\\_Latn lav ldn\\_Latn lfn\\_Cyrl lfn\\_Latn lij lin lit liv\\_Latn lkt lld\\_Latn lmo ltg ltz lug lzh lzh\\_Hans mad mah mai mal mar max\\_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob\\_Hebr nog non\\_Latn nov\\_Latn npi nya oci ori orv\\_Cyrl oss ota\\_Arab 
ota\\_Latn pag pan\\_Guru pap pau pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por ppl\\_Latn prg\\_Latn pus quc qya qya\\_Latn rap rif\\_Latn roh rom ron rue run rus sag sah san\\_Deva scn sco sgs shs\\_Latn shy\\_Latn sin sjn\\_Latn slv sma sme smo sna snd\\_Arab som spa sqi srp\\_Cyrl srp\\_Latn stq sun swe swg swh tah tam tat tat\\_Arab tat\\_Latn tel tet tgk\\_Cyrl tha tir tlh\\_Latn tly\\_Latn tmw\\_Latn toi\\_Latn ton tpw\\_Latn tso tuk tuk\\_Latn tur tvl tyv tzl tzl\\_Latn udm uig\\_Arab uig\\_Cyrl ukr umb urd uzb\\_Cyrl uzb\\_Latn vec vie vie\\_Hani vol\\_Latn vro war wln wol wuu xal xho yid yor yue yue\\_Hans yue\\_Hant zho zho\\_Hans zho\\_Hant zlm\\_Latn zsm\\_Latn zul zza\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.5, chr-F: 0.341\ntestset: URL, BLEU: 16.8, chr-F: 0.441\ntestset: URL, BLEU: 31.3, chr-F: 0.580\ntestset: URL, BLEU: 16.4, chr-F: 0.422\ntestset: URL, BLEU: 21.3, chr-F: 0.502\ntestset: URL, BLEU: 12.7, chr-F: 0.409\ntestset: URL, BLEU: 19.8, chr-F: 0.467\ntestset: URL, BLEU: 13.3, chr-F: 0.385\ntestset: URL, BLEU: 19.9, chr-F: 0.482\ntestset: URL, BLEU: 26.7, chr-F: 0.520\ntestset: URL, BLEU: 29.8, chr-F: 0.541\ntestset: URL, BLEU: 21.1, chr-F: 0.487\ntestset: URL, BLEU: 22.6, chr-F: 0.499\ntestset: URL, BLEU: 25.8, chr-F: 0.530\ntestset: URL, BLEU: 15.1, chr-F: 0.430\ntestset: URL, BLEU: 29.4, chr-F: 0.555\ntestset: URL, BLEU: 26.1, chr-F: 0.534\ntestset: URL, BLEU: 21.6, chr-F: 0.491\ntestset: URL, BLEU: 22.3, chr-F: 0.502\ntestset: URL, BLEU: 23.6, chr-F: 0.514\ntestset: URL, BLEU: 19.8, chr-F: 0.480\ntestset: URL, BLEU: 20.9, chr-F: 0.487\ntestset: URL, BLEU: 25.0, chr-F: 0.523\ntestset: URL, BLEU: 14.7, chr-F: 0.425\ntestset: URL, BLEU: 27.6, chr-F: 0.542\ntestset: URL, BLEU: 25.7, chr-F: 0.530\ntestset: URL, BLEU: 20.6, chr-F: 
0.491\ntestset: URL, BLEU: 23.4, chr-F: 0.517\ntestset: URL, BLEU: 26.1, chr-F: 0.537\ntestset: URL, BLEU: 29.1, chr-F: 0.561\ntestset: URL, BLEU: 21.0, chr-F: 0.489\ntestset: URL, BLEU: 21.3, chr-F: 0.494\ntestset: URL, BLEU: 26.8, chr-F: 0.546\ntestset: URL, BLEU: 28.2, chr-F: 0.549\ntestset: URL, BLEU: 20.5, chr-F: 0.485\ntestset: URL, BLEU: 22.3, chr-F: 0.503\ntestset: URL, BLEU: 27.5, chr-F: 0.545\ntestset: URL, BLEU: 26.6, chr-F: 0.532\ntestset: URL, BLEU: 30.3, chr-F: 0.567\ntestset: URL, BLEU: 22.5, chr-F: 0.498\ntestset: URL, BLEU: 25.0, chr-F: 0.518\ntestset: URL, BLEU: 27.4, chr-F: 0.537\ntestset: URL, BLEU: 21.6, chr-F: 0.484\ntestset: URL, BLEU: 28.4, chr-F: 0.555\ntestset: URL, BLEU: 24.0, chr-F: 0.517\ntestset: URL, BLEU: 24.1, chr-F: 0.511\ntestset: URL, BLEU: 29.1, chr-F: 0.563\ntestset: URL, BLEU: 14.0, chr-F: 0.414\ntestset: URL, BLEU: 24.0, chr-F: 0.521\ntestset: URL, BLEU: 21.9, chr-F: 0.481\ntestset: URL, BLEU: 25.5, chr-F: 0.519\ntestset: URL, BLEU: 17.4, chr-F: 0.441\ntestset: URL, BLEU: 22.4, chr-F: 0.494\ntestset: URL, BLEU: 23.0, chr-F: 0.500\ntestset: URL, BLEU: 30.1, chr-F: 0.560\ntestset: URL, BLEU: 18.5, chr-F: 0.461\ntestset: URL, BLEU: 29.6, chr-F: 0.562\ntestset: URL, BLEU: 22.0, chr-F: 0.495\ntestset: URL, BLEU: 14.8, chr-F: 0.415\ntestset: URL, BLEU: 20.2, chr-F: 0.475\ntestset: URL, BLEU: 26.0, chr-F: 0.523\ntestset: URL, BLEU: 19.6, chr-F: 0.465\ntestset: URL, BLEU: 16.2, chr-F: 0.454\ntestset: URL, BLEU: 24.2, chr-F: 0.510\ntestset: URL, BLEU: 15.0, chr-F: 0.412\ntestset: URL, BLEU: 13.7, chr-F: 0.412\ntestset: URL, BLEU: 21.2, chr-F: 0.486\ntestset: URL, BLEU: 31.5, chr-F: 0.564\ntestset: URL, BLEU: 19.7, chr-F: 0.473\ntestset: URL, BLEU: 15.1, chr-F: 0.418\ntestset: URL, BLEU: 21.3, chr-F: 0.490\ntestset: URL, BLEU: 15.4, chr-F: 0.421\ntestset: URL, BLEU: 12.9, chr-F: 0.408\ntestset: URL, BLEU: 27.0, chr-F: 0.529\ntestset: URL, BLEU: 17.2, chr-F: 0.438\ntestset: URL, BLEU: 9.0, chr-F: 0.342\ntestset: URL, BLEU: 22.6, chr-F: 
0.512\ntestset: URL, BLEU: 24.1, chr-F: 0.503\ntestset: URL, BLEU: 13.9, chr-F: 0.427\ntestset: URL, BLEU: 15.2, chr-F: 0.428\ntestset: URL, BLEU: 16.8, chr-F: 0.442\ntestset: URL, BLEU: 16.8, chr-F: 0.442\ntestset: URL, BLEU: 2.4, chr-F: 0.190\ntestset: URL, BLEU: 1.1, chr-F: 0.111\ntestset: URL, BLEU: 1.7, chr-F: 0.108\ntestset: URL, BLEU: 53.0, chr-F: 0.672\ntestset: URL, BLEU: 5.9, chr-F: 0.239\ntestset: URL, BLEU: 25.6, chr-F: 0.464\ntestset: URL, BLEU: 11.7, chr-F: 0.289\ntestset: URL, BLEU: 26.4, chr-F: 0.443\ntestset: URL, BLEU: 35.9, chr-F: 0.473\ntestset: URL, BLEU: 19.8, chr-F: 0.365\ntestset: URL, BLEU: 31.8, chr-F: 0.467\ntestset: URL, BLEU: 0.4, chr-F: 0.119\ntestset: URL, BLEU: 9.7, chr-F: 0.271\ntestset: URL, BLEU: 37.0, chr-F: 0.542\ntestset: URL, BLEU: 13.9, chr-F: 0.395\ntestset: URL, BLEU: 2.2, chr-F: 0.094\ntestset: URL, BLEU: 36.8, chr-F: 0.549\ntestset: URL, BLEU: 39.7, chr-F: 0.546\ntestset: URL, BLEU: 33.6, chr-F: 0.540\ntestset: URL, BLEU: 1.1, chr-F: 0.147\ntestset: URL, BLEU: 14.2, chr-F: 0.303\ntestset: URL, BLEU: 1.7, chr-F: 0.130\ntestset: URL, BLEU: 46.0, chr-F: 0.621\ntestset: URL, BLEU: 46.6, chr-F: 0.636\ntestset: URL, BLEU: 17.4, chr-F: 0.347\ntestset: URL, BLEU: 41.3, chr-F: 0.586\ntestset: URL, BLEU: 7.9, chr-F: 0.232\ntestset: URL, BLEU: 0.7, chr-F: 0.104\ntestset: URL, BLEU: 7.3, chr-F: 0.261\ntestset: URL, BLEU: 8.8, chr-F: 0.244\ntestset: URL, BLEU: 11.0, chr-F: 0.319\ntestset: URL, BLEU: 5.4, chr-F: 0.204\ntestset: URL, BLEU: 58.2, chr-F: 0.643\ntestset: URL, BLEU: 26.3, chr-F: 0.399\ntestset: URL, BLEU: 18.8, chr-F: 0.389\ntestset: URL, BLEU: 23.4, chr-F: 0.407\ntestset: URL, BLEU: 50.5, chr-F: 0.659\ntestset: URL, BLEU: 39.6, chr-F: 0.579\ntestset: URL, BLEU: 24.3, chr-F: 0.449\ntestset: URL, BLEU: 1.0, chr-F: 0.149\ntestset: URL, BLEU: 1.6, chr-F: 0.061\ntestset: URL, BLEU: 7.6, chr-F: 0.236\ntestset: URL, BLEU: 55.4, chr-F: 0.682\ntestset: URL, BLEU: 28.0, chr-F: 0.489\ntestset: URL, BLEU: 41.8, chr-F: 0.591\ntestset: 
URL, BLEU: 41.5, chr-F: 0.581\ntestset: URL, BLEU: 37.8, chr-F: 0.557\ntestset: URL, BLEU: 10.7, chr-F: 0.262\ntestset: URL, BLEU: 25.5, chr-F: 0.405\ntestset: URL, BLEU: 28.7, chr-F: 0.469\ntestset: URL, BLEU: 7.5, chr-F: 0.281\ntestset: URL, BLEU: 24.2, chr-F: 0.320\ntestset: URL, BLEU: 35.8, chr-F: 0.534\ntestset: URL, BLEU: 15.5, chr-F: 0.434\ntestset: URL, BLEU: 45.1, chr-F: 0.618\ntestset: URL, BLEU: 29.6, chr-F: 0.427\ntestset: URL, BLEU: 5.5, chr-F: 0.138\ntestset: URL, BLEU: 25.3, chr-F: 0.455\ntestset: URL, BLEU: 1.1, chr-F: 0.127\ntestset: URL, BLEU: 16.0, chr-F: 0.315\ntestset: URL, BLEU: 46.7, chr-F: 0.587\ntestset: URL, BLEU: 20.2, chr-F: 0.358\ntestset: URL, BLEU: 43.9, chr-F: 0.592\ntestset: URL, BLEU: 45.1, chr-F: 0.623\ntestset: URL, BLEU: 3.3, chr-F: 0.119\ntestset: URL, BLEU: 20.1, chr-F: 0.364\ntestset: URL, BLEU: 0.1, chr-F: 0.041\ntestset: URL, BLEU: 2.1, chr-F: 0.137\ntestset: URL, BLEU: 1.7, chr-F: 0.152\ntestset: URL, BLEU: 18.2, chr-F: 0.334\ntestset: URL, BLEU: 21.7, chr-F: 0.373\ntestset: URL, BLEU: 34.5, chr-F: 0.502\ntestset: URL, BLEU: 10.5, chr-F: 0.295\ntestset: URL, BLEU: 2.8, chr-F: 0.160\ntestset: URL, BLEU: 46.7, chr-F: 0.623\ntestset: URL, BLEU: 33.0, chr-F: 0.492\ntestset: URL, BLEU: 17.0, chr-F: 0.391\ntestset: URL, BLEU: 16.0, chr-F: 0.339\ntestset: URL, BLEU: 36.4, chr-F: 0.533\ntestset: URL, BLEU: 0.4, chr-F: 0.131\ntestset: URL, BLEU: 0.7, chr-F: 0.132\ntestset: URL, BLEU: 41.9, chr-F: 0.551\ntestset: URL, BLEU: 33.2, chr-F: 0.510\ntestset: URL, BLEU: 32.2, chr-F: 0.487\ntestset: URL, BLEU: 9.4, chr-F: 0.278\ntestset: URL, BLEU: 5.8, chr-F: 0.200\ntestset: URL, BLEU: 31.7, chr-F: 0.503\ntestset: URL, BLEU: 9.1, chr-F: 0.164\ntestset: URL, BLEU: 42.2, chr-F: 0.595\ntestset: URL, BLEU: 29.7, chr-F: 0.485\ntestset: URL, BLEU: 42.1, chr-F: 0.607\ntestset: URL, BLEU: 35.7, chr-F: 0.527\ntestset: URL, BLEU: 54.8, chr-F: 0.686\ntestset: URL, BLEU: 28.3, chr-F: 0.526\ntestset: URL, BLEU: 10.0, chr-F: 0.282\ntestset: URL, BLEU: 
0.3, chr-F: 0.115\ntestset: URL, BLEU: 5.3, chr-F: 0.140\ntestset: URL, BLEU: 18.8, chr-F: 0.387\ntestset: URL, BLEU: 3.9, chr-F: 0.205\ntestset: URL, BLEU: 16.9, chr-F: 0.329\ntestset: URL, BLEU: 16.2, chr-F: 0.374\ntestset: URL, BLEU: 31.1, chr-F: 0.493\ntestset: URL, BLEU: 24.5, chr-F: 0.437\ntestset: URL, BLEU: 7.4, chr-F: 0.192\ntestset: URL, BLEU: 1.0, chr-F: 0.154\ntestset: URL, BLEU: 12.2, chr-F: 0.290\ntestset: URL, BLEU: 22.5, chr-F: 0.355\ntestset: URL, BLEU: 27.2, chr-F: 0.470\ntestset: URL, BLEU: 2.1, chr-F: 0.129\ntestset: URL, BLEU: 4.5, chr-F: 0.259\ntestset: URL, BLEU: 1.4, chr-F: 0.099\ntestset: URL, BLEU: 26.1, chr-F: 0.387\ntestset: URL, BLEU: 5.5, chr-F: 0.256\ntestset: URL, BLEU: 9.3, chr-F: 0.288\ntestset: URL, BLEU: 9.6, chr-F: 0.208\ntestset: URL, BLEU: 30.1, chr-F: 0.475\ntestset: URL, BLEU: 11.6, chr-F: 0.284\ntestset: URL, BLEU: 4.5, chr-F: 0.214\ntestset: URL, BLEU: 21.5, chr-F: 0.402\ntestset: URL, BLEU: 40.2, chr-F: 0.577\ntestset: URL, BLEU: 0.8, chr-F: 0.115\ntestset: URL, BLEU: 23.0, chr-F: 0.433\ntestset: URL, BLEU: 9.3, chr-F: 0.287\ntestset: URL, BLEU: 2.4, chr-F: 0.196\ntestset: URL, BLEU: 44.0, chr-F: 0.597\ntestset: URL, BLEU: 1.6, chr-F: 0.115\ntestset: URL, BLEU: 2.0, chr-F: 0.113\ntestset: URL, BLEU: 18.3, chr-F: 0.312\ntestset: URL, BLEU: 25.4, chr-F: 0.395\ntestset: URL, BLEU: 35.9, chr-F: 0.509\ntestset: URL, BLEU: 5.1, chr-F: 0.357\ntestset: URL, BLEU: 2.8, chr-F: 0.123\ntestset: URL, BLEU: 5.7, chr-F: 0.175\ntestset: URL, BLEU: 56.3, chr-F: 0.703\ntestset: URL, BLEU: 37.5, chr-F: 0.534\ntestset: URL, BLEU: 22.8, chr-F: 0.470\ntestset: URL, BLEU: 2.0, chr-F: 0.110\ntestset: URL, BLEU: 59.2, chr-F: 0.764\ntestset: URL, BLEU: 9.0, chr-F: 0.199\ntestset: URL, BLEU: 44.3, chr-F: 0.593\ntestset: URL, BLEU: 31.9, chr-F: 0.424\ntestset: URL, BLEU: 38.6, chr-F: 0.540\ntestset: URL, BLEU: 2.5, chr-F: 0.101\ntestset: URL, BLEU: 0.3, chr-F: 0.110\ntestset: URL, BLEU: 13.5, chr-F: 0.334\ntestset: URL, BLEU: 8.5, chr-F: 
0.260\ntestset: URL, BLEU: 33.9, chr-F: 0.520\ntestset: URL, BLEU: 34.7, chr-F: 0.518\ntestset: URL, BLEU: 37.4, chr-F: 0.630\ntestset: URL, BLEU: 15.5, chr-F: 0.335\ntestset: URL, BLEU: 0.8, chr-F: 0.118\ntestset: URL, BLEU: 9.0, chr-F: 0.186\ntestset: URL, BLEU: 1.3, chr-F: 0.144\ntestset: URL, BLEU: 30.7, chr-F: 0.495\ntestset: URL, BLEU: 3.5, chr-F: 0.168\ntestset: URL, BLEU: 42.7, chr-F: 0.492\ntestset: URL, BLEU: 47.9, chr-F: 0.640\ntestset: URL, BLEU: 12.7, chr-F: 0.284\ntestset: URL, BLEU: 43.8, chr-F: 0.586\ntestset: URL, BLEU: 45.5, chr-F: 0.619\ntestset: URL, BLEU: 26.9, chr-F: 0.472\ntestset: URL, BLEU: 33.2, chr-F: 0.456\ntestset: URL, BLEU: 17.9, chr-F: 0.370\ntestset: URL, BLEU: 14.6, chr-F: 0.305\ntestset: URL, BLEU: 11.0, chr-F: 0.283\ntestset: URL, BLEU: 4.1, chr-F: 0.211\ntestset: URL, BLEU: 4.1, chr-F: 0.216\ntestset: URL, BLEU: 24.3, chr-F: 0.468\ntestset: URL, BLEU: 16.4, chr-F: 0.358\ntestset: URL, BLEU: 53.2, chr-F: 0.628\ntestset: URL, BLEU: 3.7, chr-F: 0.173\ntestset: URL, BLEU: 45.3, chr-F: 0.569\ntestset: URL, BLEU: 14.0, chr-F: 0.345\ntestset: URL, BLEU: 41.7, chr-F: 0.588\ntestset: URL, BLEU: 51.4, chr-F: 0.669\ntestset: URL, BLEU: 0.4, chr-F: 0.134\ntestset: URL, BLEU: 4.1, chr-F: 0.198\ntestset: URL, BLEU: 6.7, chr-F: 0.233\ntestset: URL, BLEU: 3.5, chr-F: 0.091\ntestset: URL, BLEU: 0.2, chr-F: 0.090\ntestset: URL, BLEU: 17.5, chr-F: 0.230\ntestset: URL, BLEU: 4.2, chr-F: 0.164\ntestset: URL, BLEU: 24.6, chr-F: 0.464\ntestset: URL, BLEU: 3.4, chr-F: 0.212\ntestset: URL, BLEU: 45.2, chr-F: 0.620\ntestset: URL, BLEU: 21.4, chr-F: 0.390\ntestset: URL, BLEU: 24.5, chr-F: 0.392\ntestset: URL, BLEU: 42.7, chr-F: 0.591\ntestset: URL, BLEU: 3.4, chr-F: 0.187\ntestset: URL, BLEU: 5.0, chr-F: 0.177\ntestset: URL, BLEU: 2.0, chr-F: 0.172\ntestset: URL, BLEU: 35.8, chr-F: 0.410\ntestset: URL, BLEU: 34.6, chr-F: 0.520\ntestset: URL, BLEU: 21.8, chr-F: 0.299\ntestset: URL, BLEU: 1.8, chr-F: 0.122\ntestset: URL, BLEU: 1.4, chr-F: 0.104\ntestset: 
URL, BLEU: 20.6, chr-F: 0.429\ntestset: URL, BLEU: 1.2, chr-F: 0.095\ntestset: URL, BLEU: 37.0, chr-F: 0.545\ntestset: URL, BLEU: 4.4, chr-F: 0.147\ntestset: URL, BLEU: 8.9, chr-F: 0.229\ntestset: URL, BLEU: 37.7, chr-F: 0.483\ntestset: URL, BLEU: 18.0, chr-F: 0.359\ntestset: URL, BLEU: 28.1, chr-F: 0.444\ntestset: URL, BLEU: 23.6, chr-F: 0.472\ntestset: URL, BLEU: 47.9, chr-F: 0.645\ntestset: URL, BLEU: 46.9, chr-F: 0.634\ntestset: URL, BLEU: 8.1, chr-F: 0.379\ntestset: URL, BLEU: 23.8, chr-F: 0.369\ntestset: URL, BLEU: 6.5, chr-F: 0.193\ntestset: URL, BLEU: 51.4, chr-F: 0.655\ntestset: URL, BLEU: 18.5, chr-F: 0.342\ntestset: URL, BLEU: 25.6, chr-F: 0.249\ntestset: URL, BLEU: 29.1, chr-F: 0.437\ntestset: URL, BLEU: 12.9, chr-F: 0.327\ntestset: URL, BLEU: 21.2, chr-F: 0.386\ntestset: URL, BLEU: 9.2, chr-F: 0.215\ntestset: URL, BLEU: 12.7, chr-F: 0.374\ntestset: URL, BLEU: 36.3, chr-F: 0.531\ntestset: URL, BLEU: 9.1, chr-F: 0.267\ntestset: URL, BLEU: 0.2, chr-F: 0.084\ntestset: URL, BLEU: 2.1, chr-F: 0.128\ntestset: URL, BLEU: 5.3, chr-F: 0.150\ntestset: URL, BLEU: 39.5, chr-F: 0.473\ntestset: URL, BLEU: 1.5, chr-F: 0.160\ntestset: URL, BLEU: 44.7, chr-F: 0.526\ntestset: URL, BLEU: 18.6, chr-F: 0.401\ntestset: URL, BLEU: 40.5, chr-F: 0.573\ntestset: URL, BLEU: 55.0, chr-F: 0.593\ntestset: URL, BLEU: 19.1, chr-F: 0.477\ntestset: URL, BLEU: 17.7, chr-F: 0.333\ntestset: URL, BLEU: 3.4, chr-F: 0.217\ntestset: URL, BLEU: 11.4, chr-F: 0.289\ntestset: URL, BLEU: 43.1, chr-F: 0.595\ntestset: URL, BLEU: 9.2, chr-F: 0.260\ntestset: URL, BLEU: 23.2, chr-F: 0.426\ntestset: URL, BLEU: 19.0, chr-F: 0.342\ntestset: URL, BLEU: 41.1, chr-F: 0.409\ntestset: URL, BLEU: 30.6, chr-F: 0.481\ntestset: URL, BLEU: 1.8, chr-F: 0.143\ntestset: URL, BLEU: 15.9, chr-F: 0.352\ntestset: URL, BLEU: 12.6, chr-F: 0.291\ntestset: URL, BLEU: 4.4, chr-F: 0.138\ntestset: URL, BLEU: 0.9, chr-F: 0.153\ntestset: URL, BLEU: 35.4, chr-F: 0.513\ntestset: URL, BLEU: 19.4, chr-F: 0.387\ntestset: URL, BLEU: 
19.3, chr-F: 0.327\ntestset: URL, BLEU: 25.8, chr-F: 0.448\ntestset: URL, BLEU: 40.9, chr-F: 0.567\ntestset: URL, BLEU: 1.6, chr-F: 0.125", "### System Info:\n\n\n* hf\\_name: mul-eng\n* source\\_languages: mul\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en']\n* src\\_constituents: {'sjn\\_Latn', 'cat', 'nan', 'spa', 'ile\\_Latn', 'pap', 'mwl', 'uzb\\_Latn', 'mww', 'hil', 'lij', 'avk\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'bos\\_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi\\_Latn', 'awa', 'swg', 'zsm\\_Latn', 'zho\\_Hant', 'gcf\\_Latn', 'uzb\\_Cyrl', 'isl', 'lfn\\_Latn', 'shs\\_Latn', 'nov\\_Latn', 'bho', 'ltz', 'lzh', 'kur\\_Latn', 'sun', 'arg', 'pes\\_Thaa', 'sqi', 'uig\\_Arab', 'csb\\_Latn', 'fra', 'hat', 'liv\\_Latn', 'non\\_Latn', 'sco', 'cmn\\_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul\\_Latn', 'amh', 'lfn\\_Cyrl', 'eus', 'fkv\\_Latn', 'tur', 'pus', 'afr', 'brx\\_Latn', 'nya', 'acm', 'ota\\_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho\\_Hans', 'tmw\\_Latn', 'kjh', 'ota\\_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh\\_Hans', 'ara', 'tly\\_Latn', 'lug', 'brx', 'bul', 'bel', 'vol\\_Latn', 'kat', 'gan', 'got\\_Goth', 'vro', 'ext', 'afh\\_Latn', 'gla', 'jpn', 'udm', 'mai', 
'ary', 'sin', 'tvl', 'hif\\_Latn', 'cjy\\_Hant', 'bre', 'ceb', 'mah', 'nob\\_Hebr', 'crh\\_Latn', 'prg\\_Latn', 'khm', 'ang\\_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze\\_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie\\_Hani', 'arz', 'yue', 'kha', 'san\\_Deva', 'jbo\\_Latn', 'gos', 'hau\\_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig\\_Cyrl', 'fao', 'mnw', 'zho', 'orv\\_Cyrl', 'iba', 'bel\\_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc\\_Grek', 'tpw\\_Latn', 'oci', 'mfe', 'sna', 'kir\\_Cyrl', 'tat\\_Latn', 'gom', 'ido\\_Latn', 'sgs', 'pau', 'tgk\\_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp\\_Latn', 'wuu', 'dtp', 'jbo\\_Cyrl', 'tet', 'bod', 'yue\\_Hans', 'zlm\\_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz\\_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif\\_Latn', 'vie', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina\\_Latn', 'cjy\\_Hans', 'jdt\\_Cyrl', 'gsw', 'glv', 'khm\\_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd\\_Arab', 'arq', 'mri', 'kur\\_Arab', 'por', 'hin', 'shy\\_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue\\_Hant', 'kpv', 'tam', 'est', 'frm\\_Latn', 'hoc\\_Latn', 'bam\\_Latn', 'kek\\_Latn', 'ksh', 'tlh\\_Latn', 'ltg', 'pan\\_Guru', 'hnj\\_Latn', 'cor', 'gle', 'swe', 'lin', 'qya\\_Latn', 'kum', 'mad', 'cmn\\_Hant', 'fuv', 'nau', 'mon', 'akl\\_Latn', 'guj', 'kaz\\_Latn', 'wln', 'tuk\\_Latn', 'jav\\_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws\\_Latn', 'urd', 'stq', 'tat\\_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl\\_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld\\_Latn', 'tzl\\_Latn', 'mdf', 'ike\\_Latn', 'ces', 'ldn\\_Latn', 'egl', 'heb', 'vec', 'zul', 'max\\_Latn', 'pes\\_Latn', 'yid', 'mal', 'nds'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: 
False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: mul\n* tgt\\_alpha3: eng\n* short\\_pair: mul-en\n* chrF2\\_score: 0.518\n* bleu: 34.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 72346.0\n* src\\_name: Multiple languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: mul\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: mul-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 315, 8567, 2809 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ca #es #os #eo #ro #fy #cy #is #lb #su #an #sq #fr #ht #rm #cv #ig #am #eu #tr #ps #af #ny #ch #uk #sl #lt #tk #sg #ar #lg #bg #be #ka #gd #ja #si #br #mh #km #th #ty #rw #te #mk #or #wo #kl #mr #ru #yo #hu #fo #zh #ti #co #ee #oc #sn #mt #ts #pl #gl #nb #bn #tt #bo #lo #id #gn #nv #hy #kn #to #io #so #vi #da #fj #gv #sm #nl #mi #pt #hi #se #as #ta #et #kw #ga #sv #ln #na #mn #gu #wa #lv #jv #el #my #ba #it #hr #ur #ce #nn #fi #mg #rn #xh #ab #de #cs #he #zu #yi #ml #mul #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### mul-eng\n\n\n* source group: Multiple languages\n* target group: English\n* OPUS readme: mul-eng\n* model: transformer\n* source language(s): abk acm ady afb afh\\_Latn afr akl\\_Latn aln amh ang\\_Latn apc ara arg arq ary arz asm ast avk\\_Latn awa aze\\_Latn bak bam\\_Latn bel bel\\_Latn ben bho bod bos\\_Latn bre brx brx\\_Latn bul bul\\_Latn cat ceb ces cha che chr chv cjy\\_Hans cjy\\_Hant cmn cmn\\_Hans cmn\\_Hant cor cos crh crh\\_Latn csb\\_Latn cym dan deu dsb dtp dws\\_Latn egl ell enm\\_Latn epo est eus ewe ext fao fij fin fkv\\_Latn fra frm\\_Latn frr fry fuc fuv gan gcf\\_Latn gil gla gle glg glv gom gos got\\_Goth grc\\_Grek grn gsw guj hat hau\\_Latn haw heb hif\\_Latn hil hin hnj\\_Latn hoc hoc\\_Latn hrv hsb hun hye iba ibo ido ido\\_Latn ike\\_Latn ile\\_Latn ilo ina\\_Latn ind isl ita izh jav jav\\_Java jbo jbo\\_Cyrl jbo\\_Latn jdt\\_Cyrl jpn kab kal kan kat kaz\\_Cyrl kaz\\_Latn kek\\_Latn kha khm khm\\_Latn kin kir\\_Cyrl kjh kpv krl ksh kum kur\\_Arab kur\\_Latn lad lad\\_Latn lao lat\\_Latn lav ldn\\_Latn lfn\\_Cyrl lfn\\_Latn lij lin lit liv\\_Latn lkt lld\\_Latn lmo ltg ltz lug lzh lzh\\_Hans mad mah mai mal mar max\\_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob\\_Hebr nog non\\_Latn nov\\_Latn npi nya oci ori orv\\_Cyrl oss ota\\_Arab 
ota\\_Latn pag pan\\_Guru pap pau pdc pes pes\\_Latn pes\\_Thaa pms pnb pol por ppl\\_Latn prg\\_Latn pus quc qya qya\\_Latn rap rif\\_Latn roh rom ron rue run rus sag sah san\\_Deva scn sco sgs shs\\_Latn shy\\_Latn sin sjn\\_Latn slv sma sme smo sna snd\\_Arab som spa sqi srp\\_Cyrl srp\\_Latn stq sun swe swg swh tah tam tat tat\\_Arab tat\\_Latn tel tet tgk\\_Cyrl tha tir tlh\\_Latn tly\\_Latn tmw\\_Latn toi\\_Latn ton tpw\\_Latn tso tuk tuk\\_Latn tur tvl tyv tzl tzl\\_Latn udm uig\\_Arab uig\\_Cyrl ukr umb urd uzb\\_Cyrl uzb\\_Latn vec vie vie\\_Hani vol\\_Latn vro war wln wol wuu xal xho yid yor yue yue\\_Hans yue\\_Hant zho zho\\_Hans zho\\_Hant zlm\\_Latn zsm\\_Latn zul zza\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 8.5, chr-F: 0.341\ntestset: URL, BLEU: 16.8, chr-F: 0.441\ntestset: URL, BLEU: 31.3, chr-F: 0.580\ntestset: URL, BLEU: 16.4, chr-F: 0.422\ntestset: URL, BLEU: 21.3, chr-F: 0.502\ntestset: URL, BLEU: 12.7, chr-F: 0.409\ntestset: URL, BLEU: 19.8, chr-F: 0.467\ntestset: URL, BLEU: 13.3, chr-F: 0.385\ntestset: URL, BLEU: 19.9, chr-F: 0.482\ntestset: URL, BLEU: 26.7, chr-F: 0.520\ntestset: URL, BLEU: 29.8, chr-F: 0.541\ntestset: URL, BLEU: 21.1, chr-F: 0.487\ntestset: URL, BLEU: 22.6, chr-F: 0.499\ntestset: URL, BLEU: 25.8, chr-F: 0.530\ntestset: URL, BLEU: 15.1, chr-F: 0.430\ntestset: URL, BLEU: 29.4, chr-F: 0.555\ntestset: URL, BLEU: 26.1, chr-F: 0.534\ntestset: URL, BLEU: 21.6, chr-F: 0.491\ntestset: URL, BLEU: 22.3, chr-F: 0.502\ntestset: URL, BLEU: 23.6, chr-F: 0.514\ntestset: URL, BLEU: 19.8, chr-F: 0.480\ntestset: URL, BLEU: 20.9, chr-F: 0.487\ntestset: URL, BLEU: 25.0, chr-F: 0.523\ntestset: URL, BLEU: 14.7, chr-F: 0.425\ntestset: URL, BLEU: 27.6, chr-F: 0.542\ntestset: URL, BLEU: 25.7, chr-F: 0.530\ntestset: URL, BLEU: 20.6, chr-F: 
0.491\ntestset: URL, BLEU: 23.4, chr-F: 0.517\ntestset: URL, BLEU: 26.1, chr-F: 0.537\ntestset: URL, BLEU: 29.1, chr-F: 0.561\ntestset: URL, BLEU: 21.0, chr-F: 0.489\ntestset: URL, BLEU: 21.3, chr-F: 0.494\ntestset: URL, BLEU: 26.8, chr-F: 0.546\ntestset: URL, BLEU: 28.2, chr-F: 0.549\ntestset: URL, BLEU: 20.5, chr-F: 0.485\ntestset: URL, BLEU: 22.3, chr-F: 0.503\ntestset: URL, BLEU: 27.5, chr-F: 0.545\ntestset: URL, BLEU: 26.6, chr-F: 0.532\ntestset: URL, BLEU: 30.3, chr-F: 0.567\ntestset: URL, BLEU: 22.5, chr-F: 0.498\ntestset: URL, BLEU: 25.0, chr-F: 0.518\ntestset: URL, BLEU: 27.4, chr-F: 0.537\ntestset: URL, BLEU: 21.6, chr-F: 0.484\ntestset: URL, BLEU: 28.4, chr-F: 0.555\ntestset: URL, BLEU: 24.0, chr-F: 0.517\ntestset: URL, BLEU: 24.1, chr-F: 0.511\ntestset: URL, BLEU: 29.1, chr-F: 0.563\ntestset: URL, BLEU: 14.0, chr-F: 0.414\ntestset: URL, BLEU: 24.0, chr-F: 0.521\ntestset: URL, BLEU: 21.9, chr-F: 0.481\ntestset: URL, BLEU: 25.5, chr-F: 0.519\ntestset: URL, BLEU: 17.4, chr-F: 0.441\ntestset: URL, BLEU: 22.4, chr-F: 0.494\ntestset: URL, BLEU: 23.0, chr-F: 0.500\ntestset: URL, BLEU: 30.1, chr-F: 0.560\ntestset: URL, BLEU: 18.5, chr-F: 0.461\ntestset: URL, BLEU: 29.6, chr-F: 0.562\ntestset: URL, BLEU: 22.0, chr-F: 0.495\ntestset: URL, BLEU: 14.8, chr-F: 0.415\ntestset: URL, BLEU: 20.2, chr-F: 0.475\ntestset: URL, BLEU: 26.0, chr-F: 0.523\ntestset: URL, BLEU: 19.6, chr-F: 0.465\ntestset: URL, BLEU: 16.2, chr-F: 0.454\ntestset: URL, BLEU: 24.2, chr-F: 0.510\ntestset: URL, BLEU: 15.0, chr-F: 0.412\ntestset: URL, BLEU: 13.7, chr-F: 0.412\ntestset: URL, BLEU: 21.2, chr-F: 0.486\ntestset: URL, BLEU: 31.5, chr-F: 0.564\ntestset: URL, BLEU: 19.7, chr-F: 0.473\ntestset: URL, BLEU: 15.1, chr-F: 0.418\ntestset: URL, BLEU: 21.3, chr-F: 0.490\ntestset: URL, BLEU: 15.4, chr-F: 0.421\ntestset: URL, BLEU: 12.9, chr-F: 0.408\ntestset: URL, BLEU: 27.0, chr-F: 0.529\ntestset: URL, BLEU: 17.2, chr-F: 0.438\ntestset: URL, BLEU: 9.0, chr-F: 0.342\ntestset: URL, BLEU: 22.6, chr-F: 
0.512\ntestset: URL, BLEU: 24.1, chr-F: 0.503\ntestset: URL, BLEU: 13.9, chr-F: 0.427\ntestset: URL, BLEU: 15.2, chr-F: 0.428\ntestset: URL, BLEU: 16.8, chr-F: 0.442\ntestset: URL, BLEU: 16.8, chr-F: 0.442\ntestset: URL, BLEU: 2.4, chr-F: 0.190\ntestset: URL, BLEU: 1.1, chr-F: 0.111\ntestset: URL, BLEU: 1.7, chr-F: 0.108\ntestset: URL, BLEU: 53.0, chr-F: 0.672\ntestset: URL, BLEU: 5.9, chr-F: 0.239\ntestset: URL, BLEU: 25.6, chr-F: 0.464\ntestset: URL, BLEU: 11.7, chr-F: 0.289\ntestset: URL, BLEU: 26.4, chr-F: 0.443\ntestset: URL, BLEU: 35.9, chr-F: 0.473\ntestset: URL, BLEU: 19.8, chr-F: 0.365\ntestset: URL, BLEU: 31.8, chr-F: 0.467\ntestset: URL, BLEU: 0.4, chr-F: 0.119\ntestset: URL, BLEU: 9.7, chr-F: 0.271\ntestset: URL, BLEU: 37.0, chr-F: 0.542\ntestset: URL, BLEU: 13.9, chr-F: 0.395\ntestset: URL, BLEU: 2.2, chr-F: 0.094\ntestset: URL, BLEU: 36.8, chr-F: 0.549\ntestset: URL, BLEU: 39.7, chr-F: 0.546\ntestset: URL, BLEU: 33.6, chr-F: 0.540\ntestset: URL, BLEU: 1.1, chr-F: 0.147\ntestset: URL, BLEU: 14.2, chr-F: 0.303\ntestset: URL, BLEU: 1.7, chr-F: 0.130\ntestset: URL, BLEU: 46.0, chr-F: 0.621\ntestset: URL, BLEU: 46.6, chr-F: 0.636\ntestset: URL, BLEU: 17.4, chr-F: 0.347\ntestset: URL, BLEU: 41.3, chr-F: 0.586\ntestset: URL, BLEU: 7.9, chr-F: 0.232\ntestset: URL, BLEU: 0.7, chr-F: 0.104\ntestset: URL, BLEU: 7.3, chr-F: 0.261\ntestset: URL, BLEU: 8.8, chr-F: 0.244\ntestset: URL, BLEU: 11.0, chr-F: 0.319\ntestset: URL, BLEU: 5.4, chr-F: 0.204\ntestset: URL, BLEU: 58.2, chr-F: 0.643\ntestset: URL, BLEU: 26.3, chr-F: 0.399\ntestset: URL, BLEU: 18.8, chr-F: 0.389\ntestset: URL, BLEU: 23.4, chr-F: 0.407\ntestset: URL, BLEU: 50.5, chr-F: 0.659\ntestset: URL, BLEU: 39.6, chr-F: 0.579\ntestset: URL, BLEU: 24.3, chr-F: 0.449\ntestset: URL, BLEU: 1.0, chr-F: 0.149\ntestset: URL, BLEU: 1.6, chr-F: 0.061\ntestset: URL, BLEU: 7.6, chr-F: 0.236\ntestset: URL, BLEU: 55.4, chr-F: 0.682\ntestset: URL, BLEU: 28.0, chr-F: 0.489\ntestset: URL, BLEU: 41.8, chr-F: 0.591\ntestset: 
URL, BLEU: 41.5, chr-F: 0.581\ntestset: URL, BLEU: 37.8, chr-F: 0.557\ntestset: URL, BLEU: 10.7, chr-F: 0.262\ntestset: URL, BLEU: 25.5, chr-F: 0.405\ntestset: URL, BLEU: 28.7, chr-F: 0.469\ntestset: URL, BLEU: 7.5, chr-F: 0.281\ntestset: URL, BLEU: 24.2, chr-F: 0.320\ntestset: URL, BLEU: 35.8, chr-F: 0.534\ntestset: URL, BLEU: 15.5, chr-F: 0.434\ntestset: URL, BLEU: 45.1, chr-F: 0.618\ntestset: URL, BLEU: 29.6, chr-F: 0.427\ntestset: URL, BLEU: 5.5, chr-F: 0.138\ntestset: URL, BLEU: 25.3, chr-F: 0.455\ntestset: URL, BLEU: 1.1, chr-F: 0.127\ntestset: URL, BLEU: 16.0, chr-F: 0.315\ntestset: URL, BLEU: 46.7, chr-F: 0.587\ntestset: URL, BLEU: 20.2, chr-F: 0.358\ntestset: URL, BLEU: 43.9, chr-F: 0.592\ntestset: URL, BLEU: 45.1, chr-F: 0.623\ntestset: URL, BLEU: 3.3, chr-F: 0.119\ntestset: URL, BLEU: 20.1, chr-F: 0.364\ntestset: URL, BLEU: 0.1, chr-F: 0.041\ntestset: URL, BLEU: 2.1, chr-F: 0.137\ntestset: URL, BLEU: 1.7, chr-F: 0.152\ntestset: URL, BLEU: 18.2, chr-F: 0.334\ntestset: URL, BLEU: 21.7, chr-F: 0.373\ntestset: URL, BLEU: 34.5, chr-F: 0.502\ntestset: URL, BLEU: 10.5, chr-F: 0.295\ntestset: URL, BLEU: 2.8, chr-F: 0.160\ntestset: URL, BLEU: 46.7, chr-F: 0.623\ntestset: URL, BLEU: 33.0, chr-F: 0.492\ntestset: URL, BLEU: 17.0, chr-F: 0.391\ntestset: URL, BLEU: 16.0, chr-F: 0.339\ntestset: URL, BLEU: 36.4, chr-F: 0.533\ntestset: URL, BLEU: 0.4, chr-F: 0.131\ntestset: URL, BLEU: 0.7, chr-F: 0.132\ntestset: URL, BLEU: 41.9, chr-F: 0.551\ntestset: URL, BLEU: 33.2, chr-F: 0.510\ntestset: URL, BLEU: 32.2, chr-F: 0.487\ntestset: URL, BLEU: 9.4, chr-F: 0.278\ntestset: URL, BLEU: 5.8, chr-F: 0.200\ntestset: URL, BLEU: 31.7, chr-F: 0.503\ntestset: URL, BLEU: 9.1, chr-F: 0.164\ntestset: URL, BLEU: 42.2, chr-F: 0.595\ntestset: URL, BLEU: 29.7, chr-F: 0.485\ntestset: URL, BLEU: 42.1, chr-F: 0.607\ntestset: URL, BLEU: 35.7, chr-F: 0.527\ntestset: URL, BLEU: 54.8, chr-F: 0.686\ntestset: URL, BLEU: 28.3, chr-F: 0.526\ntestset: URL, BLEU: 10.0, chr-F: 0.282\ntestset: URL, BLEU: 
0.3, chr-F: 0.115\ntestset: URL, BLEU: 5.3, chr-F: 0.140\ntestset: URL, BLEU: 18.8, chr-F: 0.387\ntestset: URL, BLEU: 3.9, chr-F: 0.205\ntestset: URL, BLEU: 16.9, chr-F: 0.329\ntestset: URL, BLEU: 16.2, chr-F: 0.374\ntestset: URL, BLEU: 31.1, chr-F: 0.493\ntestset: URL, BLEU: 24.5, chr-F: 0.437\ntestset: URL, BLEU: 7.4, chr-F: 0.192\ntestset: URL, BLEU: 1.0, chr-F: 0.154\ntestset: URL, BLEU: 12.2, chr-F: 0.290\ntestset: URL, BLEU: 22.5, chr-F: 0.355\ntestset: URL, BLEU: 27.2, chr-F: 0.470\ntestset: URL, BLEU: 2.1, chr-F: 0.129\ntestset: URL, BLEU: 4.5, chr-F: 0.259\ntestset: URL, BLEU: 1.4, chr-F: 0.099\ntestset: URL, BLEU: 26.1, chr-F: 0.387\ntestset: URL, BLEU: 5.5, chr-F: 0.256\ntestset: URL, BLEU: 9.3, chr-F: 0.288\ntestset: URL, BLEU: 9.6, chr-F: 0.208\ntestset: URL, BLEU: 30.1, chr-F: 0.475\ntestset: URL, BLEU: 11.6, chr-F: 0.284\ntestset: URL, BLEU: 4.5, chr-F: 0.214\ntestset: URL, BLEU: 21.5, chr-F: 0.402\ntestset: URL, BLEU: 40.2, chr-F: 0.577\ntestset: URL, BLEU: 0.8, chr-F: 0.115\ntestset: URL, BLEU: 23.0, chr-F: 0.433\ntestset: URL, BLEU: 9.3, chr-F: 0.287\ntestset: URL, BLEU: 2.4, chr-F: 0.196\ntestset: URL, BLEU: 44.0, chr-F: 0.597\ntestset: URL, BLEU: 1.6, chr-F: 0.115\ntestset: URL, BLEU: 2.0, chr-F: 0.113\ntestset: URL, BLEU: 18.3, chr-F: 0.312\ntestset: URL, BLEU: 25.4, chr-F: 0.395\ntestset: URL, BLEU: 35.9, chr-F: 0.509\ntestset: URL, BLEU: 5.1, chr-F: 0.357\ntestset: URL, BLEU: 2.8, chr-F: 0.123\ntestset: URL, BLEU: 5.7, chr-F: 0.175\ntestset: URL, BLEU: 56.3, chr-F: 0.703\ntestset: URL, BLEU: 37.5, chr-F: 0.534\ntestset: URL, BLEU: 22.8, chr-F: 0.470\ntestset: URL, BLEU: 2.0, chr-F: 0.110\ntestset: URL, BLEU: 59.2, chr-F: 0.764\ntestset: URL, BLEU: 9.0, chr-F: 0.199\ntestset: URL, BLEU: 44.3, chr-F: 0.593\ntestset: URL, BLEU: 31.9, chr-F: 0.424\ntestset: URL, BLEU: 38.6, chr-F: 0.540\ntestset: URL, BLEU: 2.5, chr-F: 0.101\ntestset: URL, BLEU: 0.3, chr-F: 0.110\ntestset: URL, BLEU: 13.5, chr-F: 0.334\ntestset: URL, BLEU: 8.5, chr-F: 
0.260\ntestset: URL, BLEU: 33.9, chr-F: 0.520\ntestset: URL, BLEU: 34.7, chr-F: 0.518\ntestset: URL, BLEU: 37.4, chr-F: 0.630\ntestset: URL, BLEU: 15.5, chr-F: 0.335\ntestset: URL, BLEU: 0.8, chr-F: 0.118\ntestset: URL, BLEU: 9.0, chr-F: 0.186\ntestset: URL, BLEU: 1.3, chr-F: 0.144\ntestset: URL, BLEU: 30.7, chr-F: 0.495\ntestset: URL, BLEU: 3.5, chr-F: 0.168\ntestset: URL, BLEU: 42.7, chr-F: 0.492\ntestset: URL, BLEU: 47.9, chr-F: 0.640\ntestset: URL, BLEU: 12.7, chr-F: 0.284\ntestset: URL, BLEU: 43.8, chr-F: 0.586\ntestset: URL, BLEU: 45.5, chr-F: 0.619\ntestset: URL, BLEU: 26.9, chr-F: 0.472\ntestset: URL, BLEU: 33.2, chr-F: 0.456\ntestset: URL, BLEU: 17.9, chr-F: 0.370\ntestset: URL, BLEU: 14.6, chr-F: 0.305\ntestset: URL, BLEU: 11.0, chr-F: 0.283\ntestset: URL, BLEU: 4.1, chr-F: 0.211\ntestset: URL, BLEU: 4.1, chr-F: 0.216\ntestset: URL, BLEU: 24.3, chr-F: 0.468\ntestset: URL, BLEU: 16.4, chr-F: 0.358\ntestset: URL, BLEU: 53.2, chr-F: 0.628\ntestset: URL, BLEU: 3.7, chr-F: 0.173\ntestset: URL, BLEU: 45.3, chr-F: 0.569\ntestset: URL, BLEU: 14.0, chr-F: 0.345\ntestset: URL, BLEU: 41.7, chr-F: 0.588\ntestset: URL, BLEU: 51.4, chr-F: 0.669\ntestset: URL, BLEU: 0.4, chr-F: 0.134\ntestset: URL, BLEU: 4.1, chr-F: 0.198\ntestset: URL, BLEU: 6.7, chr-F: 0.233\ntestset: URL, BLEU: 3.5, chr-F: 0.091\ntestset: URL, BLEU: 0.2, chr-F: 0.090\ntestset: URL, BLEU: 17.5, chr-F: 0.230\ntestset: URL, BLEU: 4.2, chr-F: 0.164\ntestset: URL, BLEU: 24.6, chr-F: 0.464\ntestset: URL, BLEU: 3.4, chr-F: 0.212\ntestset: URL, BLEU: 45.2, chr-F: 0.620\ntestset: URL, BLEU: 21.4, chr-F: 0.390\ntestset: URL, BLEU: 24.5, chr-F: 0.392\ntestset: URL, BLEU: 42.7, chr-F: 0.591\ntestset: URL, BLEU: 3.4, chr-F: 0.187\ntestset: URL, BLEU: 5.0, chr-F: 0.177\ntestset: URL, BLEU: 2.0, chr-F: 0.172\ntestset: URL, BLEU: 35.8, chr-F: 0.410\ntestset: URL, BLEU: 34.6, chr-F: 0.520\ntestset: URL, BLEU: 21.8, chr-F: 0.299\ntestset: URL, BLEU: 1.8, chr-F: 0.122\ntestset: URL, BLEU: 1.4, chr-F: 0.104\ntestset: 
URL, BLEU: 20.6, chr-F: 0.429\ntestset: URL, BLEU: 1.2, chr-F: 0.095\ntestset: URL, BLEU: 37.0, chr-F: 0.545\ntestset: URL, BLEU: 4.4, chr-F: 0.147\ntestset: URL, BLEU: 8.9, chr-F: 0.229\ntestset: URL, BLEU: 37.7, chr-F: 0.483\ntestset: URL, BLEU: 18.0, chr-F: 0.359\ntestset: URL, BLEU: 28.1, chr-F: 0.444\ntestset: URL, BLEU: 23.6, chr-F: 0.472\ntestset: URL, BLEU: 47.9, chr-F: 0.645\ntestset: URL, BLEU: 46.9, chr-F: 0.634\ntestset: URL, BLEU: 8.1, chr-F: 0.379\ntestset: URL, BLEU: 23.8, chr-F: 0.369\ntestset: URL, BLEU: 6.5, chr-F: 0.193\ntestset: URL, BLEU: 51.4, chr-F: 0.655\ntestset: URL, BLEU: 18.5, chr-F: 0.342\ntestset: URL, BLEU: 25.6, chr-F: 0.249\ntestset: URL, BLEU: 29.1, chr-F: 0.437\ntestset: URL, BLEU: 12.9, chr-F: 0.327\ntestset: URL, BLEU: 21.2, chr-F: 0.386\ntestset: URL, BLEU: 9.2, chr-F: 0.215\ntestset: URL, BLEU: 12.7, chr-F: 0.374\ntestset: URL, BLEU: 36.3, chr-F: 0.531\ntestset: URL, BLEU: 9.1, chr-F: 0.267\ntestset: URL, BLEU: 0.2, chr-F: 0.084\ntestset: URL, BLEU: 2.1, chr-F: 0.128\ntestset: URL, BLEU: 5.3, chr-F: 0.150\ntestset: URL, BLEU: 39.5, chr-F: 0.473\ntestset: URL, BLEU: 1.5, chr-F: 0.160\ntestset: URL, BLEU: 44.7, chr-F: 0.526\ntestset: URL, BLEU: 18.6, chr-F: 0.401\ntestset: URL, BLEU: 40.5, chr-F: 0.573\ntestset: URL, BLEU: 55.0, chr-F: 0.593\ntestset: URL, BLEU: 19.1, chr-F: 0.477\ntestset: URL, BLEU: 17.7, chr-F: 0.333\ntestset: URL, BLEU: 3.4, chr-F: 0.217\ntestset: URL, BLEU: 11.4, chr-F: 0.289\ntestset: URL, BLEU: 43.1, chr-F: 0.595\ntestset: URL, BLEU: 9.2, chr-F: 0.260\ntestset: URL, BLEU: 23.2, chr-F: 0.426\ntestset: URL, BLEU: 19.0, chr-F: 0.342\ntestset: URL, BLEU: 41.1, chr-F: 0.409\ntestset: URL, BLEU: 30.6, chr-F: 0.481\ntestset: URL, BLEU: 1.8, chr-F: 0.143\ntestset: URL, BLEU: 15.9, chr-F: 0.352\ntestset: URL, BLEU: 12.6, chr-F: 0.291\ntestset: URL, BLEU: 4.4, chr-F: 0.138\ntestset: URL, BLEU: 0.9, chr-F: 0.153\ntestset: URL, BLEU: 35.4, chr-F: 0.513\ntestset: URL, BLEU: 19.4, chr-F: 0.387\ntestset: URL, BLEU: 
19.3, chr-F: 0.327\ntestset: URL, BLEU: 25.8, chr-F: 0.448\ntestset: URL, BLEU: 40.9, chr-F: 0.567\ntestset: URL, BLEU: 1.6, chr-F: 0.125### System Info:\n\n\n* hf\\_name: mul-eng\n* source\\_languages: mul\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en']\n* src\\_constituents: {'sjn\\_Latn', 'cat', 'nan', 'spa', 'ile\\_Latn', 'pap', 'mwl', 'uzb\\_Latn', 'mww', 'hil', 'lij', 'avk\\_Latn', 'lad\\_Latn', 'lat\\_Latn', 'bos\\_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi\\_Latn', 'awa', 'swg', 'zsm\\_Latn', 'zho\\_Hant', 'gcf\\_Latn', 'uzb\\_Cyrl', 'isl', 'lfn\\_Latn', 'shs\\_Latn', 'nov\\_Latn', 'bho', 'ltz', 'lzh', 'kur\\_Latn', 'sun', 'arg', 'pes\\_Thaa', 'sqi', 'uig\\_Arab', 'csb\\_Latn', 'fra', 'hat', 'liv\\_Latn', 'non\\_Latn', 'sco', 'cmn\\_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul\\_Latn', 'amh', 'lfn\\_Cyrl', 'eus', 'fkv\\_Latn', 'tur', 'pus', 'afr', 'brx\\_Latn', 'nya', 'acm', 'ota\\_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho\\_Hans', 'tmw\\_Latn', 'kjh', 'ota\\_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh\\_Hans', 'ara', 'tly\\_Latn', 'lug', 'brx', 'bul', 'bel', 'vol\\_Latn', 'kat', 'gan', 'got\\_Goth', 'vro', 'ext', 'afh\\_Latn', 'gla', 'jpn', 'udm', 'mai', 
'ary', 'sin', 'tvl', 'hif\\_Latn', 'cjy\\_Hant', 'bre', 'ceb', 'mah', 'nob\\_Hebr', 'crh\\_Latn', 'prg\\_Latn', 'khm', 'ang\\_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze\\_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie\\_Hani', 'arz', 'yue', 'kha', 'san\\_Deva', 'jbo\\_Latn', 'gos', 'hau\\_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig\\_Cyrl', 'fao', 'mnw', 'zho', 'orv\\_Cyrl', 'iba', 'bel\\_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc\\_Grek', 'tpw\\_Latn', 'oci', 'mfe', 'sna', 'kir\\_Cyrl', 'tat\\_Latn', 'gom', 'ido\\_Latn', 'sgs', 'pau', 'tgk\\_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp\\_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp\\_Latn', 'wuu', 'dtp', 'jbo\\_Cyrl', 'tet', 'bod', 'yue\\_Hans', 'zlm\\_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz\\_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif\\_Latn', 'vie', 'enm\\_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina\\_Latn', 'cjy\\_Hans', 'jdt\\_Cyrl', 'gsw', 'glv', 'khm\\_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd\\_Arab', 'arq', 'mri', 'kur\\_Arab', 'por', 'hin', 'shy\\_Latn', 'sme', 'rap', 'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue\\_Hant', 'kpv', 'tam', 'est', 'frm\\_Latn', 'hoc\\_Latn', 'bam\\_Latn', 'kek\\_Latn', 'ksh', 'tlh\\_Latn', 'ltg', 'pan\\_Guru', 'hnj\\_Latn', 'cor', 'gle', 'swe', 'lin', 'qya\\_Latn', 'kum', 'mad', 'cmn\\_Hant', 'fuv', 'nau', 'mon', 'akl\\_Latn', 'guj', 'kaz\\_Latn', 'wln', 'tuk\\_Latn', 'jav\\_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws\\_Latn', 'urd', 'stq', 'tat\\_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl\\_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld\\_Latn', 'tzl\\_Latn', 'mdf', 'ike\\_Latn', 'ces', 'ldn\\_Latn', 'egl', 'heb', 'vec', 'zul', 'max\\_Latn', 'pes\\_Latn', 'yid', 'mal', 'nds'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: 
False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: mul\n* tgt\\_alpha3: eng\n* short\\_pair: mul-en\n* chrF2\\_score: 0.518\n* bleu: 34.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 72346.0\n* src\\_name: Multiple languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: mul\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: mul-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-ng-en

* source languages: ng
* target languages: en
* OPUS readme: [ng-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ng-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ng.en | 27.3 | 0.443 |
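The chr-F column in the benchmark table is a character n-gram F-score. As a rough illustration only — this is a toy sketch, not the official sacreBLEU chrF implementation, which additionally normalizes whitespace and (in chrF++) mixes in word n-grams — a simplified chrF can be computed like this:

```python
from collections import Counter

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: mean character n-gram F-beta score over n = 1..max_n."""
    def ngrams(text: str, n: int) -> Counter:
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    f_scores = []
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        if not hyp or not ref:
            continue  # strings shorter than n contribute nothing
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precision = overlap / sum(hyp.values())
        recall = overlap / sum(ref.values())
        if precision + recall == 0.0:
            f_scores.append(0.0)
        else:
            f_scores.append((1 + beta**2) * precision * recall
                            / (beta**2 * precision + recall))
    return sum(f_scores) / len(f_scores) if f_scores else 0.0

print(chrf("translation", "translation"))  # identical strings score 1.0
```

Identical strings score 1.0 and fully disjoint strings score 0.0; the 0.443 above sits between those extremes, as expected for a mid-quality system.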
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ng-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ng", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ng #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ng-en

* source languages: ng
* target languages: en
* OPUS readme: ng-en
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: URL
* test set translations: URL
* test set scores: URL

Benchmarks
----------

testset: URL, BLEU: 27.3, chr-F: 0.443
[ "### opus-mt-ng-en\n\n\n* source languages: ng\n* target languages: en\n* OPUS readme: ng-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.443" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ng #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ng-en\n\n\n* source languages: ng\n* target languages: en\n* OPUS readme: ng-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.443" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ng #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ng-en\n\n\n* source languages: ng\n* target languages: en\n* OPUS readme: ng-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.443" ]
translation
transformers
### nic-eng

* source group: Niger-Kordofanian languages
* target group: English
* OPUS readme: [nic-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nic-eng/README.md)
* model: transformer
* source language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bam-eng.bam.eng | 2.4 | 0.090 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.3 | 0.384 |
| Tatoeba-test.ful-eng.ful.eng | 1.2 | 0.114 |
| Tatoeba-test.ibo-eng.ibo.eng | 7.5 | 0.197 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.481 |
| Tatoeba-test.lin-eng.lin.eng | 3.1 | 0.185 |
| Tatoeba-test.lug-eng.lug.eng | 3.1 | 0.261 |
| Tatoeba-test.multi.eng | 21.3 | 0.377 |
| Tatoeba-test.nya-eng.nya.eng | 31.6 | 0.502 |
| Tatoeba-test.run-eng.run.eng | 24.9 | 0.420 |
| Tatoeba-test.sag-eng.sag.eng | 5.2 | 0.231 |
| Tatoeba-test.sna-eng.sna.eng | 20.1 | 0.374 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.191 |
| Tatoeba-test.toi-eng.toi.eng | 4.8 | 0.122 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 9.0 | 0.246 |
| Tatoeba-test.wol-eng.wol.eng | 14.0 | 0.212 |
| Tatoeba-test.xho-eng.xho.eng | 38.2 | 0.558 |
| Tatoeba-test.yor-eng.yor.eng | 21.2 | 0.364 |
| Tatoeba-test.zul-eng.zul.eng | 42.3 | 0.589 |

### System Info:
- hf_name: nic-eng
- source_languages: nic
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nic-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic', 'en']
- src_constituents: {'bam_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.test.txt
- src_alpha3: nic
- tgt_alpha3: eng
- short_pair: nic-en
- chrF2_score: 0.377
- bleu: 21.3
- brevity_penalty: 1.0
- ref_len: 15228.0
- src_name: Niger-Kordofanian languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: nic
- tgt_alpha2: en
- prefer_old: False
- long_pair: nic-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
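The `bleu`, `brevity_penalty`, and `ref_len` fields in the System Info follow the standard BLEU definitions: the brevity penalty is 1.0 whenever the system output is at least as long as the reference, and decays exponentially otherwise. A minimal sketch of that term (the n-gram precision part of BLEU is omitted here):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: penalizes hypotheses shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# The nic-eng card reports ref_len 15228.0 with brevity_penalty 1.0,
# i.e. the system output was not shorter than the reference:
print(brevity_penalty(15228, 15228))  # 1.0
```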
{"language": ["sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "nic", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nic-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "nic", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "sn", "rw", "wo", "ig", "sg", "ee", "zu", "lg", "ts", "ln", "ny", "yo", "rn", "xh", "nic", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #nic #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nic-eng

* source group: Niger-Kordofanian languages
* target group: English
* OPUS readme: nic-eng
* model: transformer
* source language(s): bam\_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\_Latn tso umb wol xho yor zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: URL
* test set translations: URL
* test set scores: URL

Benchmarks
----------

testset: URL, BLEU: 2.4, chr-F: 0.090
testset: URL, BLEU: 10.3, chr-F: 0.384
testset: URL, BLEU: 1.2, chr-F: 0.114
testset: URL, BLEU: 7.5, chr-F: 0.197
testset: URL, BLEU: 30.7, chr-F: 0.481
testset: URL, BLEU: 3.1, chr-F: 0.185
testset: URL, BLEU: 3.1, chr-F: 0.261
testset: URL, BLEU: 21.3, chr-F: 0.377
testset: URL, BLEU: 31.6, chr-F: 0.502
testset: URL, BLEU: 24.9, chr-F: 0.420
testset: URL, BLEU: 5.2, chr-F: 0.231
testset: URL, BLEU: 20.1, chr-F: 0.374
testset: URL, BLEU: 4.6, chr-F: 0.191
testset: URL, BLEU: 4.8, chr-F: 0.122
testset: URL, BLEU: 100.0, chr-F: 1.000
testset: URL, BLEU: 9.0, chr-F: 0.246
testset: URL, BLEU: 14.0, chr-F: 0.212
testset: URL, BLEU: 38.2, chr-F: 0.558
testset: URL, BLEU: 21.2, chr-F: 0.364
testset: URL, BLEU: 42.3, chr-F: 0.589

### System Info:

* hf\_name: nic-eng
* source\_languages: nic
* target\_languages: eng
* opus\_readme\_url: URL
* original\_repo: Tatoeba-Challenge
* tags: ['translation']
* languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic', 'en']
* src\_constituents: {'bam\_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\_Latn', 'umb'}
* tgt\_constituents: {'eng'}
* src\_multilingual: True
* tgt\_multilingual: False
* prepro: normalization + SentencePiece (spm32k,spm32k)
* url\_model: URL
* url\_test\_set: URL
* src\_alpha3: nic
* tgt\_alpha3: eng
* short\_pair: nic-en
* chrF2\_score: 0.377
* bleu: 21.3
* brevity\_penalty: 1.0
* ref\_len: 15228.0
* src\_name: Niger-Kordofanian languages
* tgt\_name: English
* train\_date: 2020-08-01
* src\_alpha2: nic
* tgt\_alpha2: en
* prefer\_old: False
* long\_pair: nic-eng
* helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
* transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
* port\_machine: brutasse
* port\_time: 2020-08-21-14:41
[ "### nic-eng\n\n\n* source group: Niger-Kordofanian languages\n* target group: English\n* OPUS readme: nic-eng\n* model: transformer\n* source language(s): bam\\_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.4, chr-F: 0.090\ntestset: URL, BLEU: 10.3, chr-F: 0.384\ntestset: URL, BLEU: 1.2, chr-F: 0.114\ntestset: URL, BLEU: 7.5, chr-F: 0.197\ntestset: URL, BLEU: 30.7, chr-F: 0.481\ntestset: URL, BLEU: 3.1, chr-F: 0.185\ntestset: URL, BLEU: 3.1, chr-F: 0.261\ntestset: URL, BLEU: 21.3, chr-F: 0.377\ntestset: URL, BLEU: 31.6, chr-F: 0.502\ntestset: URL, BLEU: 24.9, chr-F: 0.420\ntestset: URL, BLEU: 5.2, chr-F: 0.231\ntestset: URL, BLEU: 20.1, chr-F: 0.374\ntestset: URL, BLEU: 4.6, chr-F: 0.191\ntestset: URL, BLEU: 4.8, chr-F: 0.122\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 9.0, chr-F: 0.246\ntestset: URL, BLEU: 14.0, chr-F: 0.212\ntestset: URL, BLEU: 38.2, chr-F: 0.558\ntestset: URL, BLEU: 21.2, chr-F: 0.364\ntestset: URL, BLEU: 42.3, chr-F: 0.589", "### System Info:\n\n\n* hf\\_name: nic-eng\n* source\\_languages: nic\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic', 'en']\n* src\\_constituents: {'bam\\_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nic\n* tgt\\_alpha3: eng\n* 
short\\_pair: nic-en\n* chrF2\\_score: 0.377\n* bleu: 21.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 15228.0\n* src\\_name: Niger-Kordofanian languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: nic\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: nic-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #nic #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nic-eng\n\n\n* source group: Niger-Kordofanian languages\n* target group: English\n* OPUS readme: nic-eng\n* model: transformer\n* source language(s): bam\\_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.4, chr-F: 0.090\ntestset: URL, BLEU: 10.3, chr-F: 0.384\ntestset: URL, BLEU: 1.2, chr-F: 0.114\ntestset: URL, BLEU: 7.5, chr-F: 0.197\ntestset: URL, BLEU: 30.7, chr-F: 0.481\ntestset: URL, BLEU: 3.1, chr-F: 0.185\ntestset: URL, BLEU: 3.1, chr-F: 0.261\ntestset: URL, BLEU: 21.3, chr-F: 0.377\ntestset: URL, BLEU: 31.6, chr-F: 0.502\ntestset: URL, BLEU: 24.9, chr-F: 0.420\ntestset: URL, BLEU: 5.2, chr-F: 0.231\ntestset: URL, BLEU: 20.1, chr-F: 0.374\ntestset: URL, BLEU: 4.6, chr-F: 0.191\ntestset: URL, BLEU: 4.8, chr-F: 0.122\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 9.0, chr-F: 0.246\ntestset: URL, BLEU: 14.0, chr-F: 0.212\ntestset: URL, BLEU: 38.2, chr-F: 0.558\ntestset: URL, BLEU: 21.2, chr-F: 0.364\ntestset: URL, BLEU: 42.3, chr-F: 0.589", "### System Info:\n\n\n* hf\\_name: nic-eng\n* source\\_languages: nic\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic', 'en']\n* src\\_constituents: {'bam\\_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* 
tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nic\n* tgt\\_alpha3: eng\n* short\\_pair: nic-en\n* chrF2\\_score: 0.377\n* bleu: 21.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 15228.0\n* src\\_name: Niger-Kordofanian languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: nic\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: nic-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 85, 604, 560 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #sn #rw #wo #ig #sg #ee #zu #lg #ts #ln #ny #yo #rn #xh #nic #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nic-eng\n\n\n* source group: Niger-Kordofanian languages\n* target group: English\n* OPUS readme: nic-eng\n* model: transformer\n* source language(s): bam\\_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi\\_Latn tso umb wol xho yor zul\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 2.4, chr-F: 0.090\ntestset: URL, BLEU: 10.3, chr-F: 0.384\ntestset: URL, BLEU: 1.2, chr-F: 0.114\ntestset: URL, BLEU: 7.5, chr-F: 0.197\ntestset: URL, BLEU: 30.7, chr-F: 0.481\ntestset: URL, BLEU: 3.1, chr-F: 0.185\ntestset: URL, BLEU: 3.1, chr-F: 0.261\ntestset: URL, BLEU: 21.3, chr-F: 0.377\ntestset: URL, BLEU: 31.6, chr-F: 0.502\ntestset: URL, BLEU: 24.9, chr-F: 0.420\ntestset: URL, BLEU: 5.2, chr-F: 0.231\ntestset: URL, BLEU: 20.1, chr-F: 0.374\ntestset: URL, BLEU: 4.6, chr-F: 0.191\ntestset: URL, BLEU: 4.8, chr-F: 0.122\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 9.0, chr-F: 0.246\ntestset: URL, BLEU: 14.0, chr-F: 0.212\ntestset: URL, BLEU: 38.2, chr-F: 0.558\ntestset: URL, BLEU: 21.2, chr-F: 0.364\ntestset: URL, BLEU: 42.3, chr-F: 0.589### System Info:\n\n\n* hf\\_name: nic-eng\n* source\\_languages: nic\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic', 'en']\n* src\\_constituents: {'bam\\_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi\\_Latn', 'umb'}\n* 
tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nic\n* tgt\\_alpha3: eng\n* short\\_pair: nic-en\n* chrF2\\_score: 0.377\n* bleu: 21.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 15228.0\n* src\\_name: Niger-Kordofanian languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: nic\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: nic-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-niu-de * source languages: niu * target languages: de * OPUS readme: [niu-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-de/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-de/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-de/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.niu.de | 20.2 | 0.395 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-niu-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "niu", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #niu #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-niu-de * source languages: niu * target languages: de * OPUS readme: niu-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.2, chr-F: 0.395
[ "### opus-mt-niu-de\n\n\n* source languages: niu\n* target languages: de\n* OPUS readme: niu-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.395" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-niu-de\n\n\n* source languages: niu\n* target languages: de\n* OPUS readme: niu-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.395" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-niu-de\n\n\n* source languages: niu\n* target languages: de\n* OPUS readme: niu-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.395" ]
translation
transformers
### opus-mt-niu-en * source languages: niu * target languages: en * OPUS readme: [niu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.niu.en | 46.1 | 0.604 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-niu-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "niu", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #niu #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-niu-en * source languages: niu * target languages: en * OPUS readme: niu-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 46.1, chr-F: 0.604
[ "### opus-mt-niu-en\n\n\n* source languages: niu\n* target languages: en\n* OPUS readme: niu-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.1, chr-F: 0.604" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-niu-en\n\n\n* source languages: niu\n* target languages: en\n* OPUS readme: niu-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.1, chr-F: 0.604" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-niu-en\n\n\n* source languages: niu\n* target languages: en\n* OPUS readme: niu-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.1, chr-F: 0.604" ]
translation
transformers
### opus-mt-niu-es * source languages: niu * target languages: es * OPUS readme: [niu-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.niu.es | 24.2 | 0.419 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-niu-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "niu", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #niu #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-niu-es * source languages: niu * target languages: es * OPUS readme: niu-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.2, chr-F: 0.419
[ "### opus-mt-niu-es\n\n\n* source languages: niu\n* target languages: es\n* OPUS readme: niu-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.419" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-niu-es\n\n\n* source languages: niu\n* target languages: es\n* OPUS readme: niu-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.419" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-niu-es\n\n\n* source languages: niu\n* target languages: es\n* OPUS readme: niu-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.2, chr-F: 0.419" ]
translation
transformers
### opus-mt-niu-fi * source languages: niu * target languages: fi * OPUS readme: [niu-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.niu.fi | 24.8 | 0.474 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-niu-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "niu", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #niu #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-niu-fi * source languages: niu * target languages: fi * OPUS readme: niu-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.8, chr-F: 0.474
[ "### opus-mt-niu-fi\n\n\n* source languages: niu\n* target languages: fi\n* OPUS readme: niu-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.8, chr-F: 0.474" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-niu-fi\n\n\n* source languages: niu\n* target languages: fi\n* OPUS readme: niu-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.8, chr-F: 0.474" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-niu-fi\n\n\n* source languages: niu\n* target languages: fi\n* OPUS readme: niu-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.8, chr-F: 0.474" ]
translation
transformers
### opus-mt-niu-fr * source languages: niu * target languages: fr * OPUS readme: [niu-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.niu.fr | 28.1 | 0.452 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-niu-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "niu", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #niu #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-niu-fr * source languages: niu * target languages: fr * OPUS readme: niu-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 28.1, chr-F: 0.452
[ "### opus-mt-niu-fr\n\n\n* source languages: niu\n* target languages: fr\n* OPUS readme: niu-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.1, chr-F: 0.452" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-niu-fr\n\n\n* source languages: niu\n* target languages: fr\n* OPUS readme: niu-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.1, chr-F: 0.452" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-niu-fr\n\n\n* source languages: niu\n* target languages: fr\n* OPUS readme: niu-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.1, chr-F: 0.452" ]
translation
transformers
### opus-mt-niu-sv * source languages: niu * target languages: sv * OPUS readme: [niu-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.niu.sv | 29.2 | 0.478 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-niu-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "niu", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #niu #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-niu-sv * source languages: niu * target languages: sv * OPUS readme: niu-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 29.2, chr-F: 0.478
[ "### opus-mt-niu-sv\n\n\n* source languages: niu\n* target languages: sv\n* OPUS readme: niu-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.2, chr-F: 0.478" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-niu-sv\n\n\n* source languages: niu\n* target languages: sv\n* OPUS readme: niu-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.2, chr-F: 0.478" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #niu #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-niu-sv\n\n\n* source languages: niu\n* target languages: sv\n* OPUS readme: niu-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.2, chr-F: 0.478" ]
translation
transformers
### nld-afr * source group: Dutch * target group: Afrikaans * OPUS readme: [nld-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-afr/README.md) * model: transformer-align * source language(s): nld * target language(s): afr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nld.afr | 57.8 | 0.749 | ### System Info: - hf_name: nld-afr - source_languages: nld - target_languages: afr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-afr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['nl', 'af'] - src_constituents: {'nld'} - tgt_constituents: {'afr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.test.txt - src_alpha3: nld - tgt_alpha3: afr - short_pair: nl-af - chrF2_score: 0.7490000000000001 - bleu: 57.8 - brevity_penalty: 1.0 - ref_len: 6823.0 - src_name: Dutch - tgt_name: Afrikaans - train_date: 2020-06-17 - src_alpha2: nl - tgt_alpha2: af - prefer_old: False - long_pair: nld-afr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["nl", "af"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nl-af
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nl", "af", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "nl", "af" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nl #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nld-afr * source group: Dutch * target group: Afrikaans * OPUS readme: nld-afr * model: transformer-align * source language(s): nld * target language(s): afr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 57.8, chr-F: 0.749 ### System Info: * hf\_name: nld-afr * source\_languages: nld * target\_languages: afr * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['nl', 'af'] * src\_constituents: {'nld'} * tgt\_constituents: {'afr'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nld * tgt\_alpha3: afr * short\_pair: nl-af * chrF2\_score: 0.7490000000000001 * bleu: 57.8 * brevity\_penalty: 1.0 * ref\_len: 6823.0 * src\_name: Dutch * tgt\_name: Afrikaans * train\_date: 2020-06-17 * src\_alpha2: nl * tgt\_alpha2: af * prefer\_old: False * long\_pair: nld-afr * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nld-afr\n\n\n* source group: Dutch\n* target group: Afrikaans\n* OPUS readme: nld-afr\n* model: transformer-align\n* source language(s): nld\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.8, chr-F: 0.749", "### System Info:\n\n\n* hf\\_name: nld-afr\n* source\\_languages: nld\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'af']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: afr\n* short\\_pair: nl-af\n* chrF2\\_score: 0.7490000000000001\n* bleu: 57.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 6823.0\n* src\\_name: Dutch\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-17\n* src\\_alpha2: nl\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: nld-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nld-afr\n\n\n* source group: Dutch\n* target group: Afrikaans\n* OPUS readme: nld-afr\n* model: transformer-align\n* source language(s): nld\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.8, chr-F: 0.749", "### System Info:\n\n\n* hf\\_name: nld-afr\n* source\\_languages: nld\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'af']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: afr\n* short\\_pair: nl-af\n* chrF2\\_score: 0.7490000000000001\n* bleu: 57.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 6823.0\n* src\\_name: Dutch\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-17\n* src\\_alpha2: nl\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: nld-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 406 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #af #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nld-afr\n\n\n* source group: Dutch\n* target group: Afrikaans\n* OPUS readme: nld-afr\n* model: transformer-align\n* source language(s): nld\n* target language(s): afr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 57.8, chr-F: 0.749### System Info:\n\n\n* hf\\_name: nld-afr\n* source\\_languages: nld\n* target\\_languages: afr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'af']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'afr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: afr\n* short\\_pair: nl-af\n* chrF2\\_score: 0.7490000000000001\n* bleu: 57.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 6823.0\n* src\\_name: Dutch\n* tgt\\_name: Afrikaans\n* train\\_date: 2020-06-17\n* src\\_alpha2: nl\n* tgt\\_alpha2: af\n* prefer\\_old: False\n* long\\_pair: nld-afr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### nld-cat * source group: Dutch * target group: Catalan * OPUS readme: [nld-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-cat/README.md) * model: transformer-align * source language(s): nld * target language(s): cat * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nld.cat | 42.1 | 0.624 | ### System Info: - hf_name: nld-cat - source_languages: nld - target_languages: cat - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-cat/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['nl', 'ca'] - src_constituents: {'nld'} - tgt_constituents: {'cat'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.test.txt - src_alpha3: nld - tgt_alpha3: cat - short_pair: nl-ca - chrF2_score: 0.624 - bleu: 42.1 - brevity_penalty: 0.988 - ref_len: 3942.0 - src_name: Dutch - tgt_name: Catalan - train_date: 2020-06-16 - src_alpha2: nl - tgt_alpha2: ca - prefer_old: False - long_pair: nld-cat - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["nl", "ca"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nl-ca
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nl", "ca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "nl", "ca" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nl #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nld-cat * source group: Dutch * target group: Catalan * OPUS readme: nld-cat * model: transformer-align * source language(s): nld * target language(s): cat * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 42.1, chr-F: 0.624 ### System Info: * hf\_name: nld-cat * source\_languages: nld * target\_languages: cat * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['nl', 'ca'] * src\_constituents: {'nld'} * tgt\_constituents: {'cat'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm12k,spm12k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nld * tgt\_alpha3: cat * short\_pair: nl-ca * chrF2\_score: 0.624 * bleu: 42.1 * brevity\_penalty: 0.988 * ref\_len: 3942.0 * src\_name: Dutch * tgt\_name: Catalan * train\_date: 2020-06-16 * src\_alpha2: nl * tgt\_alpha2: ca * prefer\_old: False * long\_pair: nld-cat * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nld-cat\n\n\n* source group: Dutch\n* target group: Catalan\n* OPUS readme: nld-cat\n* model: transformer-align\n* source language(s): nld\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.624", "### System Info:\n\n\n* hf\\_name: nld-cat\n* source\\_languages: nld\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'ca']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: cat\n* short\\_pair: nl-ca\n* chrF2\\_score: 0.624\n* bleu: 42.1\n* brevity\\_penalty: 0.988\n* ref\\_len: 3942.0\n* src\\_name: Dutch\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-16\n* src\\_alpha2: nl\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: nld-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nld-cat\n\n\n* source group: Dutch\n* target group: Catalan\n* OPUS readme: nld-cat\n* model: transformer-align\n* source language(s): nld\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.624", "### System Info:\n\n\n* hf\\_name: nld-cat\n* source\\_languages: nld\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'ca']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: cat\n* short\\_pair: nl-ca\n* chrF2\\_score: 0.624\n* bleu: 42.1\n* brevity\\_penalty: 0.988\n* ref\\_len: 3942.0\n* src\\_name: Dutch\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-16\n* src\\_alpha2: nl\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: nld-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 396 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nld-cat\n\n\n* source group: Dutch\n* target group: Catalan\n* OPUS readme: nld-cat\n* model: transformer-align\n* source language(s): nld\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.624### System Info:\n\n\n* hf\\_name: nld-cat\n* source\\_languages: nld\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'ca']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: cat\n* short\\_pair: nl-ca\n* chrF2\\_score: 0.624\n* bleu: 42.1\n* brevity\\_penalty: 0.988\n* ref\\_len: 3942.0\n* src\\_name: Dutch\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-16\n* src\\_alpha2: nl\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: nld-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
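The rows below pair ISO alpha-2 codes (`src_alpha2`, `tgt_alpha2`) with a Hub repo `id` such as `Helsinki-NLP/opus-mt-nl-en`. A minimal sketch of that naming convention, inferred from the `id` values in this dump rather than from any official spec:

```python
def opus_mt_repo_id(src_alpha2: str, tgt_alpha2: str) -> str:
    """Reconstruct the Hub repo id used in these rows' `id` column.

    The pattern is inferred from the dump itself (e.g. nl + en gives
    "Helsinki-NLP/opus-mt-nl-en"). Multilingual models use group names
    instead of alpha-2 codes and are not covered by this sketch.
    """
    return f"Helsinki-NLP/opus-mt-{src_alpha2}-{tgt_alpha2}"
```

The `short_pair` field (e.g. `nl-ca`) is simply the suffix of this id.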
### opus-mt-nl-en * source languages: nl * target languages: en * OPUS readme: [nl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-05.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.zip) * test set translations: [opus-2019-12-05.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.test.txt) * test set scores: [opus-2019-12-05.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.nl.en | 60.9 | 0.749 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nl-en
null
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "nl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #rust #marian #text2text-generation #translation #nl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nl-en * source languages: nl * target languages: en * OPUS readme: nl-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 60.9, chr-F: 0.749
[ "### opus-mt-nl-en\n\n\n* source languages: nl\n* target languages: en\n* OPUS readme: nl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.9, chr-F: 0.749" ]
[ "TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #nl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nl-en\n\n\n* source languages: nl\n* target languages: en\n* OPUS readme: nl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.9, chr-F: 0.749" ]
[ 53, 106 ]
[ "TAGS\n#transformers #pytorch #tf #rust #marian #text2text-generation #translation #nl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nl-en\n\n\n* source languages: nl\n* target languages: en\n* OPUS readme: nl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 60.9, chr-F: 0.749" ]
translation
transformers
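Every benchmark line in these cards reports BLEU alongside chr-F. As a rough, self-contained sketch of what the chr-F column measures (a character n-gram F-score with beta = 2, so recall-weighted; this simplified version omits chrF++'s word-n-gram component and sacreBLEU's exact normalization details):

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    # Character n-grams with whitespace removed, as in the original chrF setup.
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: mean char n-gram precision/recall, combined as F-beta."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue  # n-gram order longer than one of the strings
        overlap = sum((hyp & ref).values())  # clipped (min-count) matches
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

The reported scores (e.g. chr-F 0.749 for Tatoeba.nl.en above) come from the reference implementation, not this sketch, but the quantity is the same idea.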
### nld-epo * source group: Dutch * target group: Esperanto * OPUS readme: [nld-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md) * model: transformer-align * source language(s): nld * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nld.epo | 16.1 | 0.355 | ### System Info: - hf_name: nld-epo - source_languages: nld - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['nl', 'eo'] - src_constituents: {'nld'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt - src_alpha3: nld - tgt_alpha3: epo - short_pair: nl-eo - chrF2_score: 0.355 - bleu: 16.1 - brevity_penalty: 0.9359999999999999 - ref_len: 72293.0 - src_name: Dutch - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: nl - tgt_alpha2: eo - prefer_old: False - long_pair: nld-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["nl", "eo"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nl-eo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nl", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "nl", "eo" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nl #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nld-epo * source group: Dutch * target group: Esperanto * OPUS readme: nld-epo * model: transformer-align * source language(s): nld * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 16.1, chr-F: 0.355 ### System Info: * hf\_name: nld-epo * source\_languages: nld * target\_languages: epo * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['nl', 'eo'] * src\_constituents: {'nld'} * tgt\_constituents: {'epo'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nld * tgt\_alpha3: epo * short\_pair: nl-eo * chrF2\_score: 0.355 * bleu: 16.1 * brevity\_penalty: 0.9359999999999999 * ref\_len: 72293.0 * src\_name: Dutch * tgt\_name: Esperanto * train\_date: 2020-06-16 * src\_alpha2: nl * tgt\_alpha2: eo * prefer\_old: False * long\_pair: nld-epo * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nld-epo\n\n\n* source group: Dutch\n* target group: Esperanto\n* OPUS readme: nld-epo\n* model: transformer-align\n* source language(s): nld\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.1, chr-F: 0.355", "### System Info:\n\n\n* hf\\_name: nld-epo\n* source\\_languages: nld\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'eo']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: epo\n* short\\_pair: nl-eo\n* chrF2\\_score: 0.355\n* bleu: 16.1\n* brevity\\_penalty: 0.9359999999999999\n* ref\\_len: 72293.0\n* src\\_name: Dutch\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: nl\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: nld-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nld-epo\n\n\n* source group: Dutch\n* target group: Esperanto\n* OPUS readme: nld-epo\n* model: transformer-align\n* source language(s): nld\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.1, chr-F: 0.355", "### System Info:\n\n\n* hf\\_name: nld-epo\n* source\\_languages: nld\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'eo']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: epo\n* short\\_pair: nl-eo\n* chrF2\\_score: 0.355\n* bleu: 16.1\n* brevity\\_penalty: 0.9359999999999999\n* ref\\_len: 72293.0\n* src\\_name: Dutch\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: nl\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: nld-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 138, 418 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nld-epo\n\n\n* source group: Dutch\n* target group: Esperanto\n* OPUS readme: nld-epo\n* model: transformer-align\n* source language(s): nld\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.1, chr-F: 0.355### System Info:\n\n\n* hf\\_name: nld-epo\n* source\\_languages: nld\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'eo']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: epo\n* short\\_pair: nl-eo\n* chrF2\\_score: 0.355\n* bleu: 16.1\n* brevity\\_penalty: 0.9359999999999999\n* ref\\_len: 72293.0\n* src\\_name: Dutch\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: nl\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: nld-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
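The System Info block above reports `brevity_penalty: 0.936` and `ref_len: 72293` for nld-epo. These come from BLEU's length penalty; a minimal sketch of that formula follows (the hypothesis length below is back-solved from the reported values for illustration, it is not taken from the actual test output):

```python
import math

def brevity_penalty(ref_len: float, hyp_len: float) -> float:
    """BLEU brevity penalty (Papineni et al., 2002): penalizes short output."""
    if hyp_len >= ref_len:
        return 1.0  # no penalty when the hypothesis is at least reference length
    return math.exp(1.0 - ref_len / hyp_len)

# Back-solve the hypothesis length implied by the nld-epo row:
# BP = exp(1 - r/c)  =>  c = r / (1 - ln(BP))
implied_hyp_len = 72293 / (1.0 - math.log(0.936))
```

So a BP of 0.936 on a 72,293-token reference implies the system produced roughly 68k tokens, i.e. output about 6% shorter than the references.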
### opus-mt-nl-es * source languages: nl * target languages: es * OPUS readme: [nl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.nl.es | 51.6 | 0.698 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nl-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nl", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nl-es * source languages: nl * target languages: es * OPUS readme: nl-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 51.6, chr-F: 0.698
[ "### opus-mt-nl-es\n\n\n* source languages: nl\n* target languages: es\n* OPUS readme: nl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.6, chr-F: 0.698" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nl-es\n\n\n* source languages: nl\n* target languages: es\n* OPUS readme: nl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.6, chr-F: 0.698" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nl-es\n\n\n* source languages: nl\n* target languages: es\n* OPUS readme: nl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.6, chr-F: 0.698" ]
translation
transformers
### opus-mt-nl-fi * source languages: nl * target languages: fi * OPUS readme: [nl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.test.txt) * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.nl.fi | 28.6 | 0.569 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nl-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nl", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nl-fi * source languages: nl * target languages: fi * OPUS readme: nl-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 28.6, chr-F: 0.569
[ "### opus-mt-nl-fi\n\n\n* source languages: nl\n* target languages: fi\n* OPUS readme: nl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.569" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nl-fi\n\n\n* source languages: nl\n* target languages: fi\n* OPUS readme: nl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.569" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nl-fi\n\n\n* source languages: nl\n* target languages: fi\n* OPUS readme: nl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.569" ]
translation
transformers
### opus-mt-nl-fr * source languages: nl * target languages: fr * OPUS readme: [nl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.nl.fr | 51.3 | 0.674 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nl-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nl", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nl-fr * source languages: nl * target languages: fr * OPUS readme: nl-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 51.3, chr-F: 0.674
[ "### opus-mt-nl-fr\n\n\n* source languages: nl\n* target languages: fr\n* OPUS readme: nl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.3, chr-F: 0.674" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nl-fr\n\n\n* source languages: nl\n* target languages: fr\n* OPUS readme: nl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.3, chr-F: 0.674" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nl-fr\n\n\n* source languages: nl\n* target languages: fr\n* OPUS readme: nl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.3, chr-F: 0.674" ]
translation
transformers
### nld-nor * source group: Dutch * target group: Norwegian * OPUS readme: [nld-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-nor/README.md) * model: transformer-align * source language(s): nld * target language(s): nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nld.nor | 36.1 | 0.562 | ### System Info: - hf_name: nld-nor - source_languages: nld - target_languages: nor - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-nor/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['nl', 'no'] - src_constituents: {'nld'} - tgt_constituents: {'nob', 'nno'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.test.txt - src_alpha3: nld - tgt_alpha3: nor - short_pair: nl-no - chrF2_score: 0.562 - bleu: 36.1 - brevity_penalty: 0.966 - ref_len: 1459.0 - src_name: Dutch - tgt_name: Norwegian - train_date: 2020-06-17 - src_alpha2: nl - tgt_alpha2: no - prefer_old: False - long_pair: nld-nor - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["nl", false], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nl-no
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nl", "no", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "nl", "no" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nl #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nld-nor * source group: Dutch * target group: Norwegian * OPUS readme: nld-nor * model: transformer-align * source language(s): nld * target language(s): nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 36.1, chr-F: 0.562 ### System Info: * hf\_name: nld-nor * source\_languages: nld * target\_languages: nor * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['nl', 'no'] * src\_constituents: {'nld'} * tgt\_constituents: {'nob', 'nno'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nld * tgt\_alpha3: nor * short\_pair: nl-no * chrF2\_score: 0.562 * bleu: 36.1 * brevity\_penalty: 0.966 * ref\_len: 1459.0 * src\_name: Dutch * tgt\_name: Norwegian * train\_date: 2020-06-17 * src\_alpha2: nl * tgt\_alpha2: no * prefer\_old: False * long\_pair: nld-nor * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nld-nor\n\n\n* source group: Dutch\n* target group: Norwegian\n* OPUS readme: nld-nor\n* model: transformer-align\n* source language(s): nld\n* target language(s): nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.562", "### System Info:\n\n\n* hf\\_name: nld-nor\n* source\\_languages: nld\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'no']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: nor\n* short\\_pair: nl-no\n* chrF2\\_score: 0.562\n* bleu: 36.1\n* brevity\\_penalty: 0.966\n* ref\\_len: 1459.0\n* src\\_name: Dutch\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: nl\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: nld-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nld-nor\n\n\n* source group: Dutch\n* target group: Norwegian\n* OPUS readme: nld-nor\n* model: transformer-align\n* source language(s): nld\n* target language(s): nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.562", "### System Info:\n\n\n* hf\\_name: nld-nor\n* source\\_languages: nld\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'no']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: nor\n* short\\_pair: nl-no\n* chrF2\\_score: 0.562\n* bleu: 36.1\n* brevity\\_penalty: 0.966\n* ref\\_len: 1459.0\n* src\\_name: Dutch\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: nl\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: nld-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 135, 402 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nld-nor\n\n\n* source group: Dutch\n* target group: Norwegian\n* OPUS readme: nld-nor\n* model: transformer-align\n* source language(s): nld\n* target language(s): nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.1, chr-F: 0.562### System Info:\n\n\n* hf\\_name: nld-nor\n* source\\_languages: nld\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'no']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: nor\n* short\\_pair: nl-no\n* chrF2\\_score: 0.562\n* bleu: 36.1\n* brevity\\_penalty: 0.966\n* ref\\_len: 1459.0\n* src\\_name: Dutch\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: nl\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: nld-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
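One data quirk worth noting in the nl-no row above: its metadata stores the language list as `["nl", false]` while the languages column holds `["nl", "no"]`. This looks like the classic YAML "Norway problem", where a YAML 1.1 parser reads the unquoted code `no` as the boolean false. A small sketch of detecting and repairing it (the repair target `"no"` is an assumption based on the languages column, not something recoverable from the boolean itself):

```python
import json

# Metadata as stored in the nl-no row above:
meta = json.loads(
    '{"language": ["nl", false], "license": "apache-2.0", "tags": ["translation"]}'
)

# YAML 1.1 parsers interpret the unquoted ISO 639-1 code `no` (Norwegian) as
# boolean false, which is the most likely way a boolean reached this field.
# Map the boolean back to the language code:
languages = ["no" if v is False else v for v in meta["language"]]
```

Quoting the code in the source YAML (`language: [nl, "no"]`) avoids the problem at ingestion time.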
### opus-mt-nl-sv * source languages: nl * target languages: sv * OPUS readme: [nl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | GlobalVoices.nl.sv | 25.0 | 0.518 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nl-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nl", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nl-sv * source languages: nl * target languages: sv * OPUS readme: nl-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.0, chr-F: 0.518
[ "### opus-mt-nl-sv\n\n\n* source languages: nl\n* target languages: sv\n* OPUS readme: nl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.518" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nl-sv\n\n\n* source languages: nl\n* target languages: sv\n* OPUS readme: nl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.518" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nl-sv\n\n\n* source languages: nl\n* target languages: sv\n* OPUS readme: nl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.518" ]
translation
transformers
### nld-ukr * source group: Dutch * target group: Ukrainian * OPUS readme: [nld-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-ukr/README.md) * model: transformer-align * source language(s): nld * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nld.ukr | 40.8 | 0.619 | ### System Info: - hf_name: nld-ukr - source_languages: nld - target_languages: ukr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-ukr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['nl', 'uk'] - src_constituents: {'nld'} - tgt_constituents: {'ukr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.test.txt - src_alpha3: nld - tgt_alpha3: ukr - short_pair: nl-uk - chrF2_score: 0.619 - bleu: 40.8 - brevity_penalty: 0.992 - ref_len: 51674.0 - src_name: Dutch - tgt_name: Ukrainian - train_date: 2020-06-17 - src_alpha2: nl - tgt_alpha2: uk - prefer_old: False - long_pair: nld-ukr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["nl", "uk"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nl-uk
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nl", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "nl", "uk" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nl #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nld-ukr * source group: Dutch * target group: Ukrainian * OPUS readme: nld-ukr * model: transformer-align * source language(s): nld * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 40.8, chr-F: 0.619 ### System Info: * hf\_name: nld-ukr * source\_languages: nld * target\_languages: ukr * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['nl', 'uk'] * src\_constituents: {'nld'} * tgt\_constituents: {'ukr'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nld * tgt\_alpha3: ukr * short\_pair: nl-uk * chrF2\_score: 0.619 * bleu: 40.8 * brevity\_penalty: 0.992 * ref\_len: 51674.0 * src\_name: Dutch * tgt\_name: Ukrainian * train\_date: 2020-06-17 * src\_alpha2: nl * tgt\_alpha2: uk * prefer\_old: False * long\_pair: nld-ukr * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nld-ukr\n\n\n* source group: Dutch\n* target group: Ukrainian\n* OPUS readme: nld-ukr\n* model: transformer-align\n* source language(s): nld\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.8, chr-F: 0.619", "### System Info:\n\n\n* hf\\_name: nld-ukr\n* source\\_languages: nld\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'uk']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: ukr\n* short\\_pair: nl-uk\n* chrF2\\_score: 0.619\n* bleu: 40.8\n* brevity\\_penalty: 0.992\n* ref\\_len: 51674.0\n* src\\_name: Dutch\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: nl\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: nld-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nld-ukr\n\n\n* source group: Dutch\n* target group: Ukrainian\n* OPUS readme: nld-ukr\n* model: transformer-align\n* source language(s): nld\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.8, chr-F: 0.619", "### System Info:\n\n\n* hf\\_name: nld-ukr\n* source\\_languages: nld\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'uk']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: ukr\n* short\\_pair: nl-uk\n* chrF2\\_score: 0.619\n* bleu: 40.8\n* brevity\\_penalty: 0.992\n* ref\\_len: 51674.0\n* src\\_name: Dutch\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: nl\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: nld-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 403 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nl #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nld-ukr\n\n\n* source group: Dutch\n* target group: Ukrainian\n* OPUS readme: nld-ukr\n* model: transformer-align\n* source language(s): nld\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.8, chr-F: 0.619### System Info:\n\n\n* hf\\_name: nld-ukr\n* source\\_languages: nld\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['nl', 'uk']\n* src\\_constituents: {'nld'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nld\n* tgt\\_alpha3: ukr\n* short\\_pair: nl-uk\n* chrF2\\_score: 0.619\n* bleu: 40.8\n* brevity\\_penalty: 0.992\n* ref\\_len: 51674.0\n* src\\_name: Dutch\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: nl\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: nld-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### nor-dan * source group: Norwegian * target group: Danish * OPUS readme: [nor-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-dan/README.md) * model: transformer-align * source language(s): nno nob * target language(s): dan * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.dan | 65.0 | 0.792 | ### System Info: - hf_name: nor-dan - source_languages: nor - target_languages: dan - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-dan/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no', 'da'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'dan'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: dan - short_pair: no-da - chrF2_score: 0.792 - bleu: 65.0 - brevity_penalty: 0.995 - ref_len: 9865.0 - src_name: Norwegian - tgt_name: Danish - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: da - prefer_old: False - long_pair: nor-dan - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false, "da"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-da
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "da", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no", "da" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-dan * source group: Norwegian * target group: Danish * OPUS readme: nor-dan * model: transformer-align * source language(s): nno nob * target language(s): dan * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 65.0, chr-F: 0.792 ### System Info: * hf\_name: nor-dan * source\_languages: nor * target\_languages: dan * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no', 'da'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'dan'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm12k,spm12k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: dan * short\_pair: no-da * chrF2\_score: 0.792 * bleu: 65.0 * brevity\_penalty: 0.995 * ref\_len: 9865.0 * src\_name: Norwegian * tgt\_name: Danish * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: da * prefer\_old: False * long\_pair: nor-dan * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-dan\n\n\n* source group: Norwegian\n* target group: Danish\n* OPUS readme: nor-dan\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 65.0, chr-F: 0.792", "### System Info:\n\n\n* hf\\_name: nor-dan\n* source\\_languages: nor\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'da']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: dan\n* short\\_pair: no-da\n* chrF2\\_score: 0.792\n* bleu: 65.0\n* brevity\\_penalty: 0.995\n* ref\\_len: 9865.0\n* src\\_name: Norwegian\n* tgt\\_name: Danish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: nor-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-dan\n\n\n* source group: Norwegian\n* target group: Danish\n* OPUS readme: nor-dan\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 65.0, chr-F: 0.792", "### System Info:\n\n\n* hf\\_name: nor-dan\n* source\\_languages: nor\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'da']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: dan\n* short\\_pair: no-da\n* chrF2\\_score: 0.792\n* bleu: 65.0\n* brevity\\_penalty: 0.995\n* ref\\_len: 9865.0\n* src\\_name: Norwegian\n* tgt\\_name: Danish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: nor-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 397 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-dan\n\n\n* source group: Norwegian\n* target group: Danish\n* OPUS readme: nor-dan\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 65.0, chr-F: 0.792### System Info:\n\n\n* hf\\_name: nor-dan\n* source\\_languages: nor\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'da']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: dan\n* short\\_pair: no-da\n* chrF2\\_score: 0.792\n* bleu: 65.0\n* brevity\\_penalty: 0.995\n* ref\\_len: 9865.0\n* src\\_name: Norwegian\n* tgt\\_name: Danish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: nor-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### nor-deu * source group: Norwegian * target group: German * OPUS readme: [nor-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md) * model: transformer-align * source language(s): nno nob * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.deu | 29.6 | 0.541 | ### System Info: - hf_name: nor-deu - source_languages: nor - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no', 'de'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: deu - short_pair: no-de - chrF2_score: 0.541 - bleu: 29.6 - brevity_penalty: 0.96 - ref_len: 34575.0 - src_name: Norwegian - tgt_name: German - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: de - prefer_old: False - long_pair: nor-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false, "de"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no", "de" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-deu * source group: Norwegian * target group: German * OPUS readme: nor-deu * model: transformer-align * source language(s): nno nob * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 29.6, chr-F: 0.541 ### System Info: * hf\_name: nor-deu * source\_languages: nor * target\_languages: deu * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no', 'de'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'deu'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: deu * short\_pair: no-de * chrF2\_score: 0.541 * bleu: 29.6 * brevity\_penalty: 0.96 * ref\_len: 34575.0 * src\_name: Norwegian * tgt\_name: German * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: de * prefer\_old: False * long\_pair: nor-deu * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-deu\n\n\n* source group: Norwegian\n* target group: German\n* OPUS readme: nor-deu\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.6, chr-F: 0.541", "### System Info:\n\n\n* hf\\_name: nor-deu\n* source\\_languages: nor\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'de']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: deu\n* short\\_pair: no-de\n* chrF2\\_score: 0.541\n* bleu: 29.6\n* brevity\\_penalty: 0.96\n* ref\\_len: 34575.0\n* src\\_name: Norwegian\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: nor-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-deu\n\n\n* source group: Norwegian\n* target group: German\n* OPUS readme: nor-deu\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.6, chr-F: 0.541", "### System Info:\n\n\n* hf\\_name: nor-deu\n* source\\_languages: nor\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'de']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: deu\n* short\\_pair: no-de\n* chrF2\\_score: 0.541\n* bleu: 29.6\n* brevity\\_penalty: 0.96\n* ref\\_len: 34575.0\n* src\\_name: Norwegian\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: nor-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 401 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-deu\n\n\n* source group: Norwegian\n* target group: German\n* OPUS readme: nor-deu\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.6, chr-F: 0.541### System Info:\n\n\n* hf\\_name: nor-deu\n* source\\_languages: nor\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'de']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: deu\n* short\\_pair: no-de\n* chrF2\\_score: 0.541\n* bleu: 29.6\n* brevity\\_penalty: 0.96\n* ref\\_len: 34575.0\n* src\\_name: Norwegian\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: nor-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### nor-spa * source group: Norwegian * target group: Spanish * OPUS readme: [nor-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md) * model: transformer-align * source language(s): nno nob * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.spa | 34.2 | 0.565 | ### System Info: - hf_name: nor-spa - source_languages: nor - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no', 'es'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: spa - short_pair: no-es - chrF2_score: 0.565 - bleu: 34.2 - brevity_penalty: 0.997 - ref_len: 7311.0 - src_name: Norwegian - tgt_name: Spanish - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: es - prefer_old: False - long_pair: nor-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false, "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no", "es" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-spa * source group: Norwegian * target group: Spanish * OPUS readme: nor-spa * model: transformer-align * source language(s): nno nob * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 34.2, chr-F: 0.565 ### System Info: * hf\_name: nor-spa * source\_languages: nor * target\_languages: spa * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no', 'es'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'spa'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm12k,spm12k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: spa * short\_pair: no-es * chrF2\_score: 0.565 * bleu: 34.2 * brevity\_penalty: 0.997 * ref\_len: 7311.0 * src\_name: Norwegian * tgt\_name: Spanish * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: es * prefer\_old: False * long\_pair: nor-spa * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-spa\n\n\n* source group: Norwegian\n* target group: Spanish\n* OPUS readme: nor-spa\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.2, chr-F: 0.565", "### System Info:\n\n\n* hf\\_name: nor-spa\n* source\\_languages: nor\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'es']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: spa\n* short\\_pair: no-es\n* chrF2\\_score: 0.565\n* bleu: 34.2\n* brevity\\_penalty: 0.997\n* ref\\_len: 7311.0\n* src\\_name: Norwegian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: nor-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-spa\n\n\n* source group: Norwegian\n* target group: Spanish\n* OPUS readme: nor-spa\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.2, chr-F: 0.565", "### System Info:\n\n\n* hf\\_name: nor-spa\n* source\\_languages: nor\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'es']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: spa\n* short\\_pair: no-es\n* chrF2\\_score: 0.565\n* bleu: 34.2\n* brevity\\_penalty: 0.997\n* ref\\_len: 7311.0\n* src\\_name: Norwegian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: nor-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 397 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-spa\n\n\n* source group: Norwegian\n* target group: Spanish\n* OPUS readme: nor-spa\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.2, chr-F: 0.565### System Info:\n\n\n* hf\\_name: nor-spa\n* source\\_languages: nor\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'es']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: spa\n* short\\_pair: no-es\n* chrF2\\_score: 0.565\n* bleu: 34.2\n* brevity\\_penalty: 0.997\n* ref\\_len: 7311.0\n* src\\_name: Norwegian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: nor-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### nor-fin * source group: Norwegian * target group: Finnish * OPUS readme: [nor-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md) * model: transformer-align * source language(s): nno nob * target language(s): fin * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.fin | 14.1 | 0.374 | ### System Info: - hf_name: nor-fin - source_languages: nor - target_languages: fin - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no', 'fi'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'fin'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: fin - short_pair: no-fi - chrF2_score: 0.374 - bleu: 14.1 - brevity_penalty: 0.894 - ref_len: 13066.0 - src_name: Norwegian - tgt_name: Finnish - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: fi - prefer_old: False - long_pair: nor-fin - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false, "fi"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no", "fi" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-fin * source group: Norwegian * target group: Finnish * OPUS readme: nor-fin * model: transformer-align * source language(s): nno nob * target language(s): fin * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 14.1, chr-F: 0.374 ### System Info: * hf\_name: nor-fin * source\_languages: nor * target\_languages: fin * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no', 'fi'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'fin'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: fin * short\_pair: no-fi * chrF2\_score: 0.374 * bleu: 14.1 * brevity\_penalty: 0.894 * ref\_len: 13066.0 * src\_name: Norwegian * tgt\_name: Finnish * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: fi * prefer\_old: False * long\_pair: nor-fin * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-fin\n\n\n* source group: Norwegian\n* target group: Finnish\n* OPUS readme: nor-fin\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.1, chr-F: 0.374", "### System Info:\n\n\n* hf\\_name: nor-fin\n* source\\_languages: nor\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'fi']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: fin\n* short\\_pair: no-fi\n* chrF2\\_score: 0.374\n* bleu: 14.1\n* brevity\\_penalty: 0.894\n* ref\\_len: 13066.0\n* src\\_name: Norwegian\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: nor-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-fin\n\n\n* source group: Norwegian\n* target group: Finnish\n* OPUS readme: nor-fin\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.1, chr-F: 0.374", "### System Info:\n\n\n* hf\\_name: nor-fin\n* source\\_languages: nor\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'fi']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: fin\n* short\\_pair: no-fi\n* chrF2\\_score: 0.374\n* bleu: 14.1\n* brevity\\_penalty: 0.894\n* ref\\_len: 13066.0\n* src\\_name: Norwegian\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: nor-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 397 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-fin\n\n\n* source group: Norwegian\n* target group: Finnish\n* OPUS readme: nor-fin\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.1, chr-F: 0.374### System Info:\n\n\n* hf\\_name: nor-fin\n* source\\_languages: nor\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'fi']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: fin\n* short\\_pair: no-fi\n* chrF2\\_score: 0.374\n* bleu: 14.1\n* brevity\\_penalty: 0.894\n* ref\\_len: 13066.0\n* src\\_name: Norwegian\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: nor-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### nor-fra * source group: Norwegian * target group: French * OPUS readme: [nor-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fra/README.md) * model: transformer-align * source language(s): nno nob * target language(s): fra * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.fra | 39.1 | 0.578 | ### System Info: - hf_name: nor-fra - source_languages: nor - target_languages: fra - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fra/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no', 'fr'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'fra'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: fra - short_pair: no-fr - chrF2_score: 0.578 - bleu: 39.1 - brevity_penalty: 0.987 - ref_len: 3205.0 - src_name: Norwegian - tgt_name: French - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: fr - prefer_old: False - long_pair: nor-fra - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false, "fr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no", "fr" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-fra * source group: Norwegian * target group: French * OPUS readme: nor-fra * model: transformer-align * source language(s): nno nob * target language(s): fra * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 39.1, chr-F: 0.578 ### System Info: * hf\_name: nor-fra * source\_languages: nor * target\_languages: fra * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no', 'fr'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'fra'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: fra * short\_pair: no-fr * chrF2\_score: 0.578 * bleu: 39.1 * brevity\_penalty: 0.987 * ref\_len: 3205.0 * src\_name: Norwegian * tgt\_name: French * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: fr * prefer\_old: False * long\_pair: nor-fra * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-fra\n\n\n* source group: Norwegian\n* target group: French\n* OPUS readme: nor-fra\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.1, chr-F: 0.578", "### System Info:\n\n\n* hf\\_name: nor-fra\n* source\\_languages: nor\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'fr']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: fra\n* short\\_pair: no-fr\n* chrF2\\_score: 0.578\n* bleu: 39.1\n* brevity\\_penalty: 0.987\n* ref\\_len: 3205.0\n* src\\_name: Norwegian\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: nor-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-fra\n\n\n* source group: Norwegian\n* target group: French\n* OPUS readme: nor-fra\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.1, chr-F: 0.578", "### System Info:\n\n\n* hf\\_name: nor-fra\n* source\\_languages: nor\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'fr']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: fra\n* short\\_pair: no-fr\n* chrF2\\_score: 0.578\n* bleu: 39.1\n* brevity\\_penalty: 0.987\n* ref\\_len: 3205.0\n* src\\_name: Norwegian\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: nor-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 397 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-fra\n\n\n* source group: Norwegian\n* target group: French\n* OPUS readme: nor-fra\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.1, chr-F: 0.578### System Info:\n\n\n* hf\\_name: nor-fra\n* source\\_languages: nor\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'fr']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: fra\n* short\\_pair: no-fr\n* chrF2\\_score: 0.578\n* bleu: 39.1\n* brevity\\_penalty: 0.987\n* ref\\_len: 3205.0\n* src\\_name: Norwegian\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: nor-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### nor-nld * source group: Norwegian * target group: Dutch * OPUS readme: [nor-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md) * model: transformer-align * source language(s): nob * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.nld | 40.2 | 0.596 | ### System Info: - hf_name: nor-nld - source_languages: nor - target_languages: nld - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no', 'nl'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'nld'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: nld - short_pair: no-nl - chrF2_score: 0.596 - bleu: 40.2 - brevity_penalty: 0.9590000000000001 - ref_len: 1535.0 - src_name: Norwegian - tgt_name: Dutch - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: nl - prefer_old: False - long_pair: nor-nld - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false, "nl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-nl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "nl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no", "nl" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-nld * source group: Norwegian * target group: Dutch * OPUS readme: nor-nld * model: transformer-align * source language(s): nob * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 40.2, chr-F: 0.596 ### System Info: * hf\_name: nor-nld * source\_languages: nor * target\_languages: nld * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no', 'nl'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'nld'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: nld * short\_pair: no-nl * chrF2\_score: 0.596 * bleu: 40.2 * brevity\_penalty: 0.9590000000000001 * ref\_len: 1535.0 * src\_name: Norwegian * tgt\_name: Dutch * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: nl * prefer\_old: False * long\_pair: nor-nld * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-nld\n\n\n* source group: Norwegian\n* target group: Dutch\n* OPUS readme: nor-nld\n* model: transformer-align\n* source language(s): nob\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.2, chr-F: 0.596", "### System Info:\n\n\n* hf\\_name: nor-nld\n* source\\_languages: nor\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'nl']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: nld\n* short\\_pair: no-nl\n* chrF2\\_score: 0.596\n* bleu: 40.2\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 1535.0\n* src\\_name: Norwegian\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: nor-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-nld\n\n\n* source group: Norwegian\n* target group: Dutch\n* OPUS readme: nor-nld\n* model: transformer-align\n* source language(s): nob\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.2, chr-F: 0.596", "### System Info:\n\n\n* hf\\_name: nor-nld\n* source\\_languages: nor\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'nl']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: nld\n* short\\_pair: no-nl\n* chrF2\\_score: 0.596\n* bleu: 40.2\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 1535.0\n* src\\_name: Norwegian\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: nor-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 135, 408 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-nld\n\n\n* source group: Norwegian\n* target group: Dutch\n* OPUS readme: nor-nld\n* model: transformer-align\n* source language(s): nob\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.2, chr-F: 0.596### System Info:\n\n\n* hf\\_name: nor-nld\n* source\\_languages: nor\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'nl']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: nld\n* short\\_pair: no-nl\n* chrF2\\_score: 0.596\n* bleu: 40.2\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 1535.0\n* src\\_name: Norwegian\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: nor-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### nor-nor * source group: Norwegian * target group: Norwegian * OPUS readme: [nor-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nor/README.md) * model: transformer-align * source language(s): nno nob * target language(s): nno nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.nor | 58.4 | 0.784 | ### System Info: - hf_name: nor-nor - source_languages: nor - target_languages: nor - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nor/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'nob', 'nno'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: nor - short_pair: no-no - chrF2_score: 0.784 - bleu: 58.4 - brevity_penalty: 0.988 - ref_len: 6351.0 - src_name: Norwegian - tgt_name: Norwegian - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: no - prefer_old: False - long_pair: nor-nor - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-no
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-nor * source group: Norwegian * target group: Norwegian * OPUS readme: nor-nor * model: transformer-align * source language(s): nno nob * target language(s): nno nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 58.4, chr-F: 0.784 ### System Info: * hf\_name: nor-nor * source\_languages: nor * target\_languages: nor * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'nob', 'nno'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: nor * short\_pair: no-no * chrF2\_score: 0.784 * bleu: 58.4 * brevity\_penalty: 0.988 * ref\_len: 6351.0 * src\_name: Norwegian * tgt\_name: Norwegian * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: no * prefer\_old: False * long\_pair: nor-nor * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-nor\n\n\n* source group: Norwegian\n* target group: Norwegian\n* OPUS readme: nor-nor\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): nno nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 58.4, chr-F: 0.784", "### System Info:\n\n\n* hf\\_name: nor-nor\n* source\\_languages: nor\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: nor\n* short\\_pair: no-no\n* chrF2\\_score: 0.784\n* bleu: 58.4\n* brevity\\_penalty: 0.988\n* ref\\_len: 6351.0\n* src\\_name: Norwegian\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: nor-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-nor\n\n\n* source group: Norwegian\n* target group: Norwegian\n* OPUS readme: nor-nor\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): nno nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 58.4, chr-F: 0.784", "### System Info:\n\n\n* hf\\_name: nor-nor\n* source\\_languages: nor\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: nor\n* short\\_pair: no-no\n* chrF2\\_score: 0.784\n* bleu: 58.4\n* brevity\\_penalty: 0.988\n* ref\\_len: 6351.0\n* src\\_name: Norwegian\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: nor-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 49, 164, 399 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-nor\n\n\n* source group: Norwegian\n* target group: Norwegian\n* OPUS readme: nor-nor\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): nno nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 58.4, chr-F: 0.784### System Info:\n\n\n* hf\\_name: nor-nor\n* source\\_languages: nor\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: nor\n* short\\_pair: no-no\n* chrF2\\_score: 0.784\n* bleu: 58.4\n* brevity\\_penalty: 0.988\n* ref\\_len: 6351.0\n* src\\_name: Norwegian\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: nor-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
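The nor-nor card above notes that a sentence-initial language token of the form `>>id<<` is required to select the target variant (`nob` or `nno`). A minimal sketch of building such an input — the helper name is illustrative, not part of the OPUS-MT toolchain:

```python
def prepend_language_token(sentence: str, target_id: str) -> str:
    """Prefix a source sentence with the sentence-initial >>id<< token
    that multi-target OPUS-MT models use to select the output language."""
    return f">>{target_id}<< {sentence}"

# e.g. ask the nor-nor model for Nynorsk output
print(prepend_language_token("Hei, hvordan går det?", "nno"))
```

Single-target pairs in this dump (nor-pol, nor-rus, nor-swe, nor-ukr) do not need the token; it only applies when the target group lists more than one language ID.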
translation
transformers
### nor-pol * source group: Norwegian * target group: Polish * OPUS readme: [nor-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-pol/README.md) * model: transformer-align * source language(s): nob * target language(s): pol * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.pol | 20.9 | 0.455 | ### System Info: - hf_name: nor-pol - source_languages: nor - target_languages: pol - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-pol/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no', 'pl'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'pol'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: pol - short_pair: no-pl - chrF2_score: 0.455 - bleu: 20.9 - brevity_penalty: 0.941 - ref_len: 1828.0 - src_name: Norwegian - tgt_name: Polish - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: pl - prefer_old: False - long_pair: nor-pol - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false, "pl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-pl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "pl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no", "pl" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-pol * source group: Norwegian * target group: Polish * OPUS readme: nor-pol * model: transformer-align * source language(s): nob * target language(s): pol * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.9, chr-F: 0.455 ### System Info: * hf\_name: nor-pol * source\_languages: nor * target\_languages: pol * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no', 'pl'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'pol'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: pol * short\_pair: no-pl * chrF2\_score: 0.455 * bleu: 20.9 * brevity\_penalty: 0.941 * ref\_len: 1828.0 * src\_name: Norwegian * tgt\_name: Polish * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: pl * prefer\_old: False * long\_pair: nor-pol * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-pol\n\n\n* source group: Norwegian\n* target group: Polish\n* OPUS readme: nor-pol\n* model: transformer-align\n* source language(s): nob\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.9, chr-F: 0.455", "### System Info:\n\n\n* hf\\_name: nor-pol\n* source\\_languages: nor\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'pl']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: pol\n* short\\_pair: no-pl\n* chrF2\\_score: 0.455\n* bleu: 20.9\n* brevity\\_penalty: 0.941\n* ref\\_len: 1828.0\n* src\\_name: Norwegian\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: nor-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-pol\n\n\n* source group: Norwegian\n* target group: Polish\n* OPUS readme: nor-pol\n* model: transformer-align\n* source language(s): nob\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.9, chr-F: 0.455", "### System Info:\n\n\n* hf\\_name: nor-pol\n* source\\_languages: nor\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'pl']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: pol\n* short\\_pair: no-pl\n* chrF2\\_score: 0.455\n* bleu: 20.9\n* brevity\\_penalty: 0.941\n* ref\\_len: 1828.0\n* src\\_name: Norwegian\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: nor-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 132, 396 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-pol\n\n\n* source group: Norwegian\n* target group: Polish\n* OPUS readme: nor-pol\n* model: transformer-align\n* source language(s): nob\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.9, chr-F: 0.455### System Info:\n\n\n* hf\\_name: nor-pol\n* source\\_languages: nor\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'pl']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: pol\n* short\\_pair: no-pl\n* chrF2\\_score: 0.455\n* bleu: 20.9\n* brevity\\_penalty: 0.941\n* ref\\_len: 1828.0\n* src\\_name: Norwegian\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: nor-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
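The `brevity_penalty` field reported alongside each BLEU score follows the standard BLEU definition: exp(1 − ref_len / cand_len) when the candidate is shorter than the reference, and 1.0 otherwise. A sketch:

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """Standard BLEU brevity penalty: penalizes candidates shorter
    than the reference and never rewards longer ones."""
    if candidate_len == 0:
        return 0.0
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)
```

With the nor-pol reference length of 1828 tokens, the reported penalty of 0.941 corresponds to a candidate of roughly 1723 tokens (a back-of-envelope inversion, not a figure from the card).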
translation
transformers
### nor-rus * source group: Norwegian * target group: Russian * OPUS readme: [nor-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-rus/README.md) * model: transformer-align * source language(s): nno nob * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.rus | 18.6 | 0.400 | ### System Info: - hf_name: nor-rus - source_languages: nor - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no', 'ru'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: rus - short_pair: no-ru - chrF2_score: 0.4 - bleu: 18.6 - brevity_penalty: 0.958 - ref_len: 10671.0 - src_name: Norwegian - tgt_name: Russian - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: ru - prefer_old: False - long_pair: nor-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false, "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no", "ru" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-rus * source group: Norwegian * target group: Russian * OPUS readme: nor-rus * model: transformer-align * source language(s): nno nob * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 18.6, chr-F: 0.400 ### System Info: * hf\_name: nor-rus * source\_languages: nor * target\_languages: rus * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no', 'ru'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'rus'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: rus * short\_pair: no-ru * chrF2\_score: 0.4 * bleu: 18.6 * brevity\_penalty: 0.958 * ref\_len: 10671.0 * src\_name: Norwegian * tgt\_name: Russian * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: ru * prefer\_old: False * long\_pair: nor-rus * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-rus\n\n\n* source group: Norwegian\n* target group: Russian\n* OPUS readme: nor-rus\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.6, chr-F: 0.400", "### System Info:\n\n\n* hf\\_name: nor-rus\n* source\\_languages: nor\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'ru']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: rus\n* short\\_pair: no-ru\n* chrF2\\_score: 0.4\n* bleu: 18.6\n* brevity\\_penalty: 0.958\n* ref\\_len: 10671.0\n* src\\_name: Norwegian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: nor-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-rus\n\n\n* source group: Norwegian\n* target group: Russian\n* OPUS readme: nor-rus\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.6, chr-F: 0.400", "### System Info:\n\n\n* hf\\_name: nor-rus\n* source\\_languages: nor\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'ru']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: rus\n* short\\_pair: no-ru\n* chrF2\\_score: 0.4\n* bleu: 18.6\n* brevity\\_penalty: 0.958\n* ref\\_len: 10671.0\n* src\\_name: Norwegian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: nor-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 133, 397 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-rus\n\n\n* source group: Norwegian\n* target group: Russian\n* OPUS readme: nor-rus\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 18.6, chr-F: 0.400### System Info:\n\n\n* hf\\_name: nor-rus\n* source\\_languages: nor\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'ru']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: rus\n* short\\_pair: no-ru\n* chrF2\\_score: 0.4\n* bleu: 18.6\n* brevity\\_penalty: 0.958\n* ref\\_len: 10671.0\n* src\\_name: Norwegian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: nor-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
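Each benchmark table reports chr-F alongside BLEU, and the system-info blocks store it as `chrF2_score`: a character n-gram F-score with β = 2, weighting recall twice as heavily as precision. A simplified sketch — real evaluations use sacrebleu's implementation, and details such as whitespace handling and averaging order may differ:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    text = text.replace(" ", "")  # chrF drops whitespace before extracting n-grams
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: average character n-gram precision and recall
    over n = 1..max_n, combined as an F-beta score."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp and not ref:
            continue  # strings shorter than n contribute nothing at this order
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()) if hyp else 0.0)
        recalls.append(overlap / sum(ref.values()) if ref else 0.0)
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

An identical hypothesis and reference score 1.0; fully disjoint strings score 0.0, which brackets the 0.38–0.78 range seen in the cards above.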
translation
transformers
### nor-swe * source group: Norwegian * target group: Swedish * OPUS readme: [nor-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-swe/README.md) * model: transformer-align * source language(s): nno nob * target language(s): swe * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.swe | 63.7 | 0.773 | ### System Info: - hf_name: nor-swe - source_languages: nor - target_languages: swe - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-swe/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no', 'sv'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'swe'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: swe - short_pair: no-sv - chrF2_score: 0.773 - bleu: 63.7 - brevity_penalty: 0.9670000000000001 - ref_len: 3672.0 - src_name: Norwegian - tgt_name: Swedish - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: sv - prefer_old: False - long_pair: nor-swe - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false, "sv"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no", "sv" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-swe * source group: Norwegian * target group: Swedish * OPUS readme: nor-swe * model: transformer-align * source language(s): nno nob * target language(s): swe * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 63.7, chr-F: 0.773 ### System Info: * hf\_name: nor-swe * source\_languages: nor * target\_languages: swe * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no', 'sv'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'swe'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: swe * short\_pair: no-sv * chrF2\_score: 0.773 * bleu: 63.7 * brevity\_penalty: 0.9670000000000001 * ref\_len: 3672.0 * src\_name: Norwegian * tgt\_name: Swedish * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: sv * prefer\_old: False * long\_pair: nor-swe * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-swe\n\n\n* source group: Norwegian\n* target group: Swedish\n* OPUS readme: nor-swe\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 63.7, chr-F: 0.773", "### System Info:\n\n\n* hf\\_name: nor-swe\n* source\\_languages: nor\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'sv']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: swe\n* short\\_pair: no-sv\n* chrF2\\_score: 0.773\n* bleu: 63.7\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 3672.0\n* src\\_name: Norwegian\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: nor-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-swe\n\n\n* source group: Norwegian\n* target group: Swedish\n* OPUS readme: nor-swe\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 63.7, chr-F: 0.773", "### System Info:\n\n\n* hf\\_name: nor-swe\n* source\\_languages: nor\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'sv']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: swe\n* short\\_pair: no-sv\n* chrF2\\_score: 0.773\n* bleu: 63.7\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 3672.0\n* src\\_name: Norwegian\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: nor-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 409 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-swe\n\n\n* source group: Norwegian\n* target group: Swedish\n* OPUS readme: nor-swe\n* model: transformer-align\n* source language(s): nno nob\n* target language(s): swe\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 63.7, chr-F: 0.773### System Info:\n\n\n* hf\\_name: nor-swe\n* source\\_languages: nor\n* target\\_languages: swe\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'sv']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'swe'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: swe\n* short\\_pair: no-sv\n* chrF2\\_score: 0.773\n* bleu: 63.7\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 3672.0\n* src\\_name: Norwegian\n* tgt\\_name: Swedish\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: sv\n* prefer\\_old: False\n* long\\_pair: nor-swe\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### nor-ukr * source group: Norwegian * target group: Ukrainian * OPUS readme: [nor-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-ukr/README.md) * model: transformer-align * source language(s): nob * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.nor.ukr | 16.6 | 0.384 | ### System Info: - hf_name: nor-ukr - source_languages: nor - target_languages: ukr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-ukr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['no', 'uk'] - src_constituents: {'nob', 'nno'} - tgt_constituents: {'ukr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.test.txt - src_alpha3: nor - tgt_alpha3: ukr - short_pair: no-uk - chrF2_score: 0.384 - bleu: 16.6 - brevity_penalty: 1.0 - ref_len: 3982.0 - src_name: Norwegian - tgt_name: Ukrainian - train_date: 2020-06-17 - src_alpha2: no - tgt_alpha2: uk - prefer_old: False - long_pair: nor-ukr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": [false, "uk"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-no-uk
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "no", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "no", "uk" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #no #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### nor-ukr * source group: Norwegian * target group: Ukrainian * OPUS readme: nor-ukr * model: transformer-align * source language(s): nob * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 16.6, chr-F: 0.384 ### System Info: * hf\_name: nor-ukr * source\_languages: nor * target\_languages: ukr * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['no', 'uk'] * src\_constituents: {'nob', 'nno'} * tgt\_constituents: {'ukr'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: nor * tgt\_alpha3: ukr * short\_pair: no-uk * chrF2\_score: 0.384 * bleu: 16.6 * brevity\_penalty: 1.0 * ref\_len: 3982.0 * src\_name: Norwegian * tgt\_name: Ukrainian * train\_date: 2020-06-17 * src\_alpha2: no * tgt\_alpha2: uk * prefer\_old: False * long\_pair: nor-ukr * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### nor-ukr\n\n\n* source group: Norwegian\n* target group: Ukrainian\n* OPUS readme: nor-ukr\n* model: transformer-align\n* source language(s): nob\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.6, chr-F: 0.384", "### System Info:\n\n\n* hf\\_name: nor-ukr\n* source\\_languages: nor\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'uk']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: ukr\n* short\\_pair: no-uk\n* chrF2\\_score: 0.384\n* bleu: 16.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 3982.0\n* src\\_name: Norwegian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: nor-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### nor-ukr\n\n\n* source group: Norwegian\n* target group: Ukrainian\n* OPUS readme: nor-ukr\n* model: transformer-align\n* source language(s): nob\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.6, chr-F: 0.384", "### System Info:\n\n\n* hf\\_name: nor-ukr\n* source\\_languages: nor\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'uk']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: ukr\n* short\\_pair: no-uk\n* chrF2\\_score: 0.384\n* bleu: 16.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 3982.0\n* src\\_name: Norwegian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: nor-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 135, 402 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #no #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### nor-ukr\n\n\n* source group: Norwegian\n* target group: Ukrainian\n* OPUS readme: nor-ukr\n* model: transformer-align\n* source language(s): nob\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.6, chr-F: 0.384### System Info:\n\n\n* hf\\_name: nor-ukr\n* source\\_languages: nor\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['no', 'uk']\n* src\\_constituents: {'nob', 'nno'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: nor\n* tgt\\_alpha3: ukr\n* short\\_pair: no-uk\n* chrF2\\_score: 0.384\n* bleu: 16.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 3982.0\n* src\\_name: Norwegian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: no\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: nor-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-nso-de * source languages: nso * target languages: de * OPUS readme: [nso-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.nso.de | 24.7 | 0.461 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nso-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nso", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nso #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nso-de * source languages: nso * target languages: de * OPUS readme: nso-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.7, chr-F: 0.461
[ "### opus-mt-nso-de\n\n\n* source languages: nso\n* target languages: de\n* OPUS readme: nso-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.7, chr-F: 0.461" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nso-de\n\n\n* source languages: nso\n* target languages: de\n* OPUS readme: nso-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.7, chr-F: 0.461" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nso-de\n\n\n* source languages: nso\n* target languages: de\n* OPUS readme: nso-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.7, chr-F: 0.461" ]
translation
transformers
### opus-mt-nso-en * source languages: nso * target languages: en * OPUS readme: [nso-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.nso.en | 48.6 | 0.634 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nso-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nso", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nso #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nso-en * source languages: nso * target languages: en * OPUS readme: nso-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 48.6, chr-F: 0.634
[ "### opus-mt-nso-en\n\n\n* source languages: nso\n* target languages: en\n* OPUS readme: nso-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.6, chr-F: 0.634" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nso-en\n\n\n* source languages: nso\n* target languages: en\n* OPUS readme: nso-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.6, chr-F: 0.634" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nso-en\n\n\n* source languages: nso\n* target languages: en\n* OPUS readme: nso-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.6, chr-F: 0.634" ]
translation
transformers
### opus-mt-nso-es * source languages: nso * target languages: es * OPUS readme: [nso-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.nso.es | 29.5 | 0.485 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nso-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nso", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nso #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nso-es * source languages: nso * target languages: es * OPUS readme: nso-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 29.5, chr-F: 0.485
[ "### opus-mt-nso-es\n\n\n* source languages: nso\n* target languages: es\n* OPUS readme: nso-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.5, chr-F: 0.485" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nso-es\n\n\n* source languages: nso\n* target languages: es\n* OPUS readme: nso-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.5, chr-F: 0.485" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nso-es\n\n\n* source languages: nso\n* target languages: es\n* OPUS readme: nso-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.5, chr-F: 0.485" ]
translation
transformers
### opus-mt-nso-fi * source languages: nso * target languages: fi * OPUS readme: [nso-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.nso.fi | 27.8 | 0.523 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nso-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nso", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nso #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nso-fi * source languages: nso * target languages: fi * OPUS readme: nso-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.8, chr-F: 0.523
[ "### opus-mt-nso-fi\n\n\n* source languages: nso\n* target languages: fi\n* OPUS readme: nso-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.523" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nso-fi\n\n\n* source languages: nso\n* target languages: fi\n* OPUS readme: nso-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.523" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nso-fi\n\n\n* source languages: nso\n* target languages: fi\n* OPUS readme: nso-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.523" ]
translation
transformers
### opus-mt-nso-fr * source languages: nso * target languages: fr * OPUS readme: [nso-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.nso.fr | 30.7 | 0.488 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nso-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nso", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nso #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nso-fr * source languages: nso * target languages: fr * OPUS readme: nso-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.7, chr-F: 0.488
[ "### opus-mt-nso-fr\n\n\n* source languages: nso\n* target languages: fr\n* OPUS readme: nso-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.488" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nso-fr\n\n\n* source languages: nso\n* target languages: fr\n* OPUS readme: nso-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.488" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nso-fr\n\n\n* source languages: nso\n* target languages: fr\n* OPUS readme: nso-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.488" ]
translation
transformers
### opus-mt-nso-sv * source languages: nso * target languages: sv * OPUS readme: [nso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.nso.sv | 34.3 | 0.527 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nso-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nso", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nso #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nso-sv * source languages: nso * target languages: sv * OPUS readme: nso-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 34.3, chr-F: 0.527
[ "### opus-mt-nso-sv\n\n\n* source languages: nso\n* target languages: sv\n* OPUS readme: nso-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.3, chr-F: 0.527" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nso-sv\n\n\n* source languages: nso\n* target languages: sv\n* OPUS readme: nso-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.3, chr-F: 0.527" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nso #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nso-sv\n\n\n* source languages: nso\n* target languages: sv\n* OPUS readme: nso-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.3, chr-F: 0.527" ]
translation
transformers
### opus-mt-ny-de * source languages: ny * target languages: de * OPUS readme: [ny-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ny.de | 23.9 | 0.440 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ny-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ny", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ny #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ny-de * source languages: ny * target languages: de * OPUS readme: ny-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.9, chr-F: 0.440
[ "### opus-mt-ny-de\n\n\n* source languages: ny\n* target languages: de\n* OPUS readme: ny-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.9, chr-F: 0.440" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ny #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ny-de\n\n\n* source languages: ny\n* target languages: de\n* OPUS readme: ny-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.9, chr-F: 0.440" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ny #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ny-de\n\n\n* source languages: ny\n* target languages: de\n* OPUS readme: ny-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.9, chr-F: 0.440" ]
translation
transformers
### opus-mt-ny-en * source languages: ny * target languages: en * OPUS readme: [ny-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ny.en | 39.7 | 0.547 | | Tatoeba.ny.en | 44.2 | 0.562 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ny-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ny", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ny-en * source languages: ny * target languages: en * OPUS readme: ny-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 39.7, chr-F: 0.547 testset: URL, BLEU: 44.2, chr-F: 0.562
[ "### opus-mt-ny-en\n\n\n* source languages: ny\n* target languages: en\n* OPUS readme: ny-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.7, chr-F: 0.547\ntestset: URL, BLEU: 44.2, chr-F: 0.562" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ny-en\n\n\n* source languages: ny\n* target languages: en\n* OPUS readme: ny-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.7, chr-F: 0.547\ntestset: URL, BLEU: 44.2, chr-F: 0.562" ]
[ 51, 129 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ny #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ny-en\n\n\n* source languages: ny\n* target languages: en\n* OPUS readme: ny-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.7, chr-F: 0.547\ntestset: URL, BLEU: 44.2, chr-F: 0.562" ]
translation
transformers
### opus-mt-ny-es * source languages: ny * target languages: es * OPUS readme: [ny-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ny.es | 27.9 | 0.457 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ny-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ny", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ny #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ny-es * source languages: ny * target languages: es * OPUS readme: ny-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.9, chr-F: 0.457
[ "### opus-mt-ny-es\n\n\n* source languages: ny\n* target languages: es\n* OPUS readme: ny-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.457" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ny #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ny-es\n\n\n* source languages: ny\n* target languages: es\n* OPUS readme: ny-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.457" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ny #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ny-es\n\n\n* source languages: ny\n* target languages: es\n* OPUS readme: ny-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.457" ]
translation
transformers
### opus-mt-nyk-en * source languages: nyk * target languages: en * OPUS readme: [nyk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nyk-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nyk-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.nyk.en | 27.3 | 0.423 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-nyk-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "nyk", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #nyk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-nyk-en * source languages: nyk * target languages: en * OPUS readme: nyk-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.3, chr-F: 0.423
[ "### opus-mt-nyk-en\n\n\n* source languages: nyk\n* target languages: en\n* OPUS readme: nyk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.423" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nyk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-nyk-en\n\n\n* source languages: nyk\n* target languages: en\n* OPUS readme: nyk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.423" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #nyk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-nyk-en\n\n\n* source languages: nyk\n* target languages: en\n* OPUS readme: nyk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.423" ]
translation
transformers
### opus-mt-om-en * source languages: om * target languages: en * OPUS readme: [om-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/om-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.om.en | 27.3 | 0.448 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-om-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "om", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #om #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-om-en * source languages: om * target languages: en * OPUS readme: om-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.3, chr-F: 0.448
[ "### opus-mt-om-en\n\n\n* source languages: om\n* target languages: en\n* OPUS readme: om-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.448" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #om #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-om-en\n\n\n* source languages: om\n* target languages: en\n* OPUS readme: om-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.448" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #om #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-om-en\n\n\n* source languages: om\n* target languages: en\n* OPUS readme: om-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.448" ]
translation
transformers
### opus-mt-pa-en * source languages: pa * target languages: en * OPUS readme: [pa-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pa-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pa-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pa.en | 20.6 | 0.320 | | Tatoeba.pa.en | 29.3 | 0.464 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pa-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pa", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pa-en * source languages: pa * target languages: en * OPUS readme: pa-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.6, chr-F: 0.320 testset: URL, BLEU: 29.3, chr-F: 0.464
[ "### opus-mt-pa-en\n\n\n* source languages: pa\n* target languages: en\n* OPUS readme: pa-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.6, chr-F: 0.320\ntestset: URL, BLEU: 29.3, chr-F: 0.464" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pa-en\n\n\n* source languages: pa\n* target languages: en\n* OPUS readme: pa-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.6, chr-F: 0.320\ntestset: URL, BLEU: 29.3, chr-F: 0.464" ]
[ 51, 128 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pa-en\n\n\n* source languages: pa\n* target languages: en\n* OPUS readme: pa-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.6, chr-F: 0.320\ntestset: URL, BLEU: 29.3, chr-F: 0.464" ]
translation
transformers
### opus-mt-pag-de * source languages: pag * target languages: de * OPUS readme: [pag-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-de/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-de/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-de/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pag.de | 22.8 | 0.435 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pag-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pag", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pag #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pag-de * source languages: pag * target languages: de * OPUS readme: pag-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.8, chr-F: 0.435
[ "### opus-mt-pag-de\n\n\n* source languages: pag\n* target languages: de\n* OPUS readme: pag-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.435" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pag #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pag-de\n\n\n* source languages: pag\n* target languages: de\n* OPUS readme: pag-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.435" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pag #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pag-de\n\n\n* source languages: pag\n* target languages: de\n* OPUS readme: pag-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.435" ]
translation
transformers
### opus-mt-pag-en * source languages: pag * target languages: en * OPUS readme: [pag-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-en/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pag.en | 42.4 | 0.580 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pag-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pag", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pag #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pag-en * source languages: pag * target languages: en * OPUS readme: pag-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 42.4, chr-F: 0.580
[ "### opus-mt-pag-en\n\n\n* source languages: pag\n* target languages: en\n* OPUS readme: pag-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.4, chr-F: 0.580" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pag #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pag-en\n\n\n* source languages: pag\n* target languages: en\n* OPUS readme: pag-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.4, chr-F: 0.580" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pag #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pag-en\n\n\n* source languages: pag\n* target languages: en\n* OPUS readme: pag-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.4, chr-F: 0.580" ]
translation
transformers
### opus-mt-pag-es * source languages: pag * target languages: es * OPUS readme: [pag-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pag.es | 27.9 | 0.459 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pag-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pag", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pag #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pag-es * source languages: pag * target languages: es * OPUS readme: pag-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.9, chr-F: 0.459
[ "### opus-mt-pag-es\n\n\n* source languages: pag\n* target languages: es\n* OPUS readme: pag-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.459" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pag #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pag-es\n\n\n* source languages: pag\n* target languages: es\n* OPUS readme: pag-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.459" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pag #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pag-es\n\n\n* source languages: pag\n* target languages: es\n* OPUS readme: pag-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.459" ]
translation
transformers
### opus-mt-pag-fi * source languages: pag * target languages: fi * OPUS readme: [pag-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pag.fi | 26.7 | 0.496 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pag-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pag", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pag #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pag-fi * source languages: pag * target languages: fi * OPUS readme: pag-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.7, chr-F: 0.496
[ "### opus-mt-pag-fi\n\n\n* source languages: pag\n* target languages: fi\n* OPUS readme: pag-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.496" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pag #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pag-fi\n\n\n* source languages: pag\n* target languages: fi\n* OPUS readme: pag-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.496" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pag #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pag-fi\n\n\n* source languages: pag\n* target languages: fi\n* OPUS readme: pag-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.496" ]
translation
transformers
### opus-mt-pag-sv * source languages: pag * target languages: sv * OPUS readme: [pag-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pag.sv | 29.8 | 0.492 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pag-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pag", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pag #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pag-sv * source languages: pag * target languages: sv * OPUS readme: pag-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 29.8, chr-F: 0.492
[ "### opus-mt-pag-sv\n\n\n* source languages: pag\n* target languages: sv\n* OPUS readme: pag-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.8, chr-F: 0.492" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pag #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pag-sv\n\n\n* source languages: pag\n* target languages: sv\n* OPUS readme: pag-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.8, chr-F: 0.492" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pag #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pag-sv\n\n\n* source languages: pag\n* target languages: sv\n* OPUS readme: pag-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.8, chr-F: 0.492" ]
translation
transformers
### opus-mt-pap-de * source languages: pap * target languages: de * OPUS readme: [pap-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-de/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pap.de | 25.0 | 0.466 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pap-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pap", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pap #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pap-de * source languages: pap * target languages: de * OPUS readme: pap-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.0, chr-F: 0.466
[ "### opus-mt-pap-de\n\n\n* source languages: pap\n* target languages: de\n* OPUS readme: pap-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.466" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pap #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pap-de\n\n\n* source languages: pap\n* target languages: de\n* OPUS readme: pap-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.466" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pap #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pap-de\n\n\n* source languages: pap\n* target languages: de\n* OPUS readme: pap-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.0, chr-F: 0.466" ]
translation
transformers
### opus-mt-pap-en * source languages: pap * target languages: en * OPUS readme: [pap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pap.en | 47.3 | 0.634 | | Tatoeba.pap.en | 63.2 | 0.684 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pap-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pap", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pap #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pap-en * source languages: pap * target languages: en * OPUS readme: pap-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 47.3, chr-F: 0.634 testset: URL, BLEU: 63.2, chr-F: 0.684
[ "### opus-mt-pap-en\n\n\n* source languages: pap\n* target languages: en\n* OPUS readme: pap-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.3, chr-F: 0.634\ntestset: URL, BLEU: 63.2, chr-F: 0.684" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pap #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pap-en\n\n\n* source languages: pap\n* target languages: en\n* OPUS readme: pap-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.3, chr-F: 0.634\ntestset: URL, BLEU: 63.2, chr-F: 0.684" ]
[ 52, 132 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pap #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pap-en\n\n\n* source languages: pap\n* target languages: en\n* OPUS readme: pap-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.3, chr-F: 0.634\ntestset: URL, BLEU: 63.2, chr-F: 0.684" ]
translation
transformers
### opus-mt-pap-es * source languages: pap * target languages: es * OPUS readme: [pap-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pap.es | 32.3 | 0.518 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pap-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pap", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pap #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pap-es * source languages: pap * target languages: es * OPUS readme: pap-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 32.3, chr-F: 0.518
[ "### opus-mt-pap-es\n\n\n* source languages: pap\n* target languages: es\n* OPUS readme: pap-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.518" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pap #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pap-es\n\n\n* source languages: pap\n* target languages: es\n* OPUS readme: pap-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.518" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pap #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pap-es\n\n\n* source languages: pap\n* target languages: es\n* OPUS readme: pap-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.3, chr-F: 0.518" ]
translation
transformers
### opus-mt-pap-fi * source languages: pap * target languages: fi * OPUS readme: [pap-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fi/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pap.fi | 27.7 | 0.520 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pap-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pap", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pap #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pap-fi * source languages: pap * target languages: fi * OPUS readme: pap-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.7, chr-F: 0.520
[ "### opus-mt-pap-fi\n\n\n* source languages: pap\n* target languages: fi\n* OPUS readme: pap-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.520" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pap #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pap-fi\n\n\n* source languages: pap\n* target languages: fi\n* OPUS readme: pap-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.520" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pap #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pap-fi\n\n\n* source languages: pap\n* target languages: fi\n* OPUS readme: pap-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.520" ]
translation
transformers
### opus-mt-pap-fr * source languages: pap * target languages: fr * OPUS readme: [pap-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pap.fr | 31.0 | 0.498 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pap-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pap", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pap #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pap-fr * source languages: pap * target languages: fr * OPUS readme: pap-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 31.0, chr-F: 0.498
[ "### opus-mt-pap-fr\n\n\n* source languages: pap\n* target languages: fr\n* OPUS readme: pap-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.0, chr-F: 0.498" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pap #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pap-fr\n\n\n* source languages: pap\n* target languages: fr\n* OPUS readme: pap-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.0, chr-F: 0.498" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pap #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pap-fr\n\n\n* source languages: pap\n* target languages: fr\n* OPUS readme: pap-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.0, chr-F: 0.498" ]
translation
transformers
### phi-eng * source group: Philippine languages * target group: English * OPUS readme: [phi-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/phi-eng/README.md) * model: transformer * source language(s): akl_Latn ceb hil ilo pag war * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.akl-eng.akl.eng | 11.6 | 0.321 | | Tatoeba-test.ceb-eng.ceb.eng | 21.7 | 0.393 | | Tatoeba-test.hil-eng.hil.eng | 17.6 | 0.371 | | Tatoeba-test.ilo-eng.ilo.eng | 36.6 | 0.560 | | Tatoeba-test.multi.eng | 21.5 | 0.391 | | Tatoeba-test.pag-eng.pag.eng | 27.5 | 0.494 | | Tatoeba-test.war-eng.war.eng | 17.3 | 0.380 | ### System Info: - hf_name: phi-eng - source_languages: phi - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/phi-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['phi', 'en'] - src_constituents: {'ilo', 'akl_Latn', 'war', 'hil', 'pag', 'ceb'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/phi-eng/opus2m-2020-08-01.test.txt - src_alpha3: phi - tgt_alpha3: eng - short_pair: phi-en - chrF2_score: 0.391 - bleu: 21.5 - brevity_penalty: 1.0 - ref_len: 2380.0 - src_name: Philippine languages - tgt_name: English - train_date: 2020-08-01 - src_alpha2: phi - tgt_alpha2: en - prefer_old: False - long_pair: phi-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["phi", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-phi-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "phi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "phi", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #phi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### phi-eng * source group: Philippine languages * target group: English * OPUS readme: phi-eng * model: transformer * source language(s): akl\_Latn ceb hil ilo pag war * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 11.6, chr-F: 0.321 testset: URL, BLEU: 21.7, chr-F: 0.393 testset: URL, BLEU: 17.6, chr-F: 0.371 testset: URL, BLEU: 36.6, chr-F: 0.560 testset: URL, BLEU: 21.5, chr-F: 0.391 testset: URL, BLEU: 27.5, chr-F: 0.494 testset: URL, BLEU: 17.3, chr-F: 0.380 ### System Info: * hf\_name: phi-eng * source\_languages: phi * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['phi', 'en'] * src\_constituents: {'ilo', 'akl\_Latn', 'war', 'hil', 'pag', 'ceb'} * tgt\_constituents: {'eng'} * src\_multilingual: True * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm12k,spm12k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: phi * tgt\_alpha3: eng * short\_pair: phi-en * chrF2\_score: 0.391 * bleu: 21.5 * brevity\_penalty: 1.0 * ref\_len: 2380.0 * src\_name: Philippine languages * tgt\_name: English * train\_date: 2020-08-01 * src\_alpha2: phi * tgt\_alpha2: en * prefer\_old: False * long\_pair: phi-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### phi-eng\n\n\n* source group: Philippine languages\n* target group: English\n* OPUS readme: phi-eng\n* model: transformer\n* source language(s): akl\\_Latn ceb hil ilo pag war\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.6, chr-F: 0.321\ntestset: URL, BLEU: 21.7, chr-F: 0.393\ntestset: URL, BLEU: 17.6, chr-F: 0.371\ntestset: URL, BLEU: 36.6, chr-F: 0.560\ntestset: URL, BLEU: 21.5, chr-F: 0.391\ntestset: URL, BLEU: 27.5, chr-F: 0.494\ntestset: URL, BLEU: 17.3, chr-F: 0.380", "### System Info:\n\n\n* hf\\_name: phi-eng\n* source\\_languages: phi\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['phi', 'en']\n* src\\_constituents: {'ilo', 'akl\\_Latn', 'war', 'hil', 'pag', 'ceb'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: phi\n* tgt\\_alpha3: eng\n* short\\_pair: phi-en\n* chrF2\\_score: 0.391\n* bleu: 21.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 2380.0\n* src\\_name: Philippine languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: phi\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: phi-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #phi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### phi-eng\n\n\n* source group: Philippine languages\n* target group: English\n* OPUS readme: phi-eng\n* model: transformer\n* source language(s): akl\\_Latn ceb hil ilo pag war\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.6, chr-F: 0.321\ntestset: URL, BLEU: 21.7, chr-F: 0.393\ntestset: URL, BLEU: 17.6, chr-F: 0.371\ntestset: URL, BLEU: 36.6, chr-F: 0.560\ntestset: URL, BLEU: 21.5, chr-F: 0.391\ntestset: URL, BLEU: 27.5, chr-F: 0.494\ntestset: URL, BLEU: 17.3, chr-F: 0.380", "### System Info:\n\n\n* hf\\_name: phi-eng\n* source\\_languages: phi\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['phi', 'en']\n* src\\_constituents: {'ilo', 'akl\\_Latn', 'war', 'hil', 'pag', 'ceb'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: phi\n* tgt\\_alpha3: eng\n* short\\_pair: phi-en\n* chrF2\\_score: 0.391\n* bleu: 21.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 2380.0\n* src\\_name: Philippine languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: phi\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: phi-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 278, 421 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #phi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### phi-eng\n\n\n* source group: Philippine languages\n* target group: English\n* OPUS readme: phi-eng\n* model: transformer\n* source language(s): akl\\_Latn ceb hil ilo pag war\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.6, chr-F: 0.321\ntestset: URL, BLEU: 21.7, chr-F: 0.393\ntestset: URL, BLEU: 17.6, chr-F: 0.371\ntestset: URL, BLEU: 36.6, chr-F: 0.560\ntestset: URL, BLEU: 21.5, chr-F: 0.391\ntestset: URL, BLEU: 27.5, chr-F: 0.494\ntestset: URL, BLEU: 17.3, chr-F: 0.380### System Info:\n\n\n* hf\\_name: phi-eng\n* source\\_languages: phi\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['phi', 'en']\n* src\\_constituents: {'ilo', 'akl\\_Latn', 'war', 'hil', 'pag', 'ceb'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: phi\n* tgt\\_alpha3: eng\n* short\\_pair: phi-en\n* chrF2\\_score: 0.391\n* bleu: 21.5\n* brevity\\_penalty: 1.0\n* ref\\_len: 2380.0\n* src\\_name: Philippine languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: phi\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: phi-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-pis-en * source languages: pis * target languages: en * OPUS readme: [pis-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pis.en | 33.3 | 0.493 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pis-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pis", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pis #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pis-en * source languages: pis * target languages: en * OPUS readme: pis-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 33.3, chr-F: 0.493
[ "### opus-mt-pis-en\n\n\n* source languages: pis\n* target languages: en\n* OPUS readme: pis-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.493" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pis #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pis-en\n\n\n* source languages: pis\n* target languages: en\n* OPUS readme: pis-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.493" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pis #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pis-en\n\n\n* source languages: pis\n* target languages: en\n* OPUS readme: pis-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.3, chr-F: 0.493" ]
translation
transformers
### opus-mt-pis-es * source languages: pis * target languages: es * OPUS readme: [pis-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pis.es | 24.1 | 0.421 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pis-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pis", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pis #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pis-es * source languages: pis * target languages: es * OPUS readme: pis-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.1, chr-F: 0.421
[ "### opus-mt-pis-es\n\n\n* source languages: pis\n* target languages: es\n* OPUS readme: pis-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.421" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pis #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pis-es\n\n\n* source languages: pis\n* target languages: es\n* OPUS readme: pis-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.421" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pis #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pis-es\n\n\n* source languages: pis\n* target languages: es\n* OPUS readme: pis-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.421" ]
translation
transformers
### opus-mt-pis-fi * source languages: pis * target languages: fi * OPUS readme: [pis-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fi/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pis.fi | 21.8 | 0.439 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pis-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pis", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pis #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pis-fi * source languages: pis * target languages: fi * OPUS readme: pis-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 21.8, chr-F: 0.439
[ "### opus-mt-pis-fi\n\n\n* source languages: pis\n* target languages: fi\n* OPUS readme: pis-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.439" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pis #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pis-fi\n\n\n* source languages: pis\n* target languages: fi\n* OPUS readme: pis-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.439" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pis #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pis-fi\n\n\n* source languages: pis\n* target languages: fi\n* OPUS readme: pis-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.439" ]
translation
transformers
### opus-mt-pis-fr * source languages: pis * target languages: fr * OPUS readme: [pis-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pis.fr | 24.9 | 0.421 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pis-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pis", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pis #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pis-fr * source languages: pis * target languages: fr * OPUS readme: pis-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.9, chr-F: 0.421
[ "### opus-mt-pis-fr\n\n\n* source languages: pis\n* target languages: fr\n* OPUS readme: pis-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.421" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pis #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pis-fr\n\n\n* source languages: pis\n* target languages: fr\n* OPUS readme: pis-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.421" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pis #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pis-fr\n\n\n* source languages: pis\n* target languages: fr\n* OPUS readme: pis-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.9, chr-F: 0.421" ]
translation
transformers
### opus-mt-pis-sv * source languages: pis * target languages: sv * OPUS readme: [pis-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pis-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pis-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pis-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.pis.sv | 25.9 | 0.442 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pis-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pis", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pis #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pis-sv * source languages: pis * target languages: sv * OPUS readme: pis-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.9, chr-F: 0.442
[ "### opus-mt-pis-sv\n\n\n* source languages: pis\n* target languages: sv\n* OPUS readme: pis-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.442" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pis #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pis-sv\n\n\n* source languages: pis\n* target languages: sv\n* OPUS readme: pis-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.442" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pis #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pis-sv\n\n\n* source languages: pis\n* target languages: sv\n* OPUS readme: pis-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.442" ]
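Each record's id follows the `Helsinki-NLP/opus-mt-<src>-<tgt>` naming convention seen above (e.g. `Helsinki-NLP/opus-mt-pis-sv`). A minimal sketch of a helper for recovering the source and target codes from such an id; the function name is an illustrative assumption, not part of any official API:

```python
def parse_opus_mt_id(repo_id: str) -> tuple[str, str]:
    """Split a repo id like 'Helsinki-NLP/opus-mt-pis-sv' into its
    (source, target) language codes. Illustrative helper; assumes the
    simple two-code naming pattern used by the records above."""
    name = repo_id.split("/", 1)[1]                      # 'opus-mt-pis-sv'
    src, tgt = name.removeprefix("opus-mt-").split("-", 1)
    return src, tgt

print(parse_opus_mt_id("Helsinki-NLP/opus-mt-pis-sv"))   # → ('pis', 'sv')
```

This matches the `src_alpha` / `tgt_alpha` fields recorded for each model card, but would need adjustment for multi-segment opus-mt variants.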
translation
transformers
### pol-ara * source group: Polish * target group: Arabic * OPUS readme: [pol-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-ara/README.md) * model: transformer * source language(s): pol * target language(s): ara arz * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.pol.ara | 20.4 | 0.491 | ### System Info: - hf_name: pol-ara - source_languages: pol - target_languages: ara - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-ara/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['pl', 'ar'] - src_constituents: {'pol'} - tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ara/opus-2020-07-03.test.txt - src_alpha3: pol - tgt_alpha3: ara - short_pair: pl-ar - chrF2_score: 0.491 - bleu: 20.4 - brevity_penalty: 0.9590000000000001 - ref_len: 1028.0 - src_name: Polish - tgt_name: Arabic - train_date: 2020-07-03 - src_alpha2: pl - tgt_alpha2: ar - prefer_old: False - long_pair: pol-ara - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["pl", "ar"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pl-ar
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pl", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "pl", "ar" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pl #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### pol-ara * source group: Polish * target group: Arabic * OPUS readme: pol-ara * model: transformer * source language(s): pol * target language(s): ara arz * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.4, chr-F: 0.491 ### System Info: * hf\_name: pol-ara * source\_languages: pol * target\_languages: ara * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['pl', 'ar'] * src\_constituents: {'pol'} * tgt\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: pol * tgt\_alpha3: ara * short\_pair: pl-ar * chrF2\_score: 0.491 * bleu: 20.4 * brevity\_penalty: 0.9590000000000001 * ref\_len: 1028.0 * src\_name: Polish * tgt\_name: Arabic * train\_date: 2020-07-03 * src\_alpha2: pl * tgt\_alpha2: ar * prefer\_old: False * long\_pair: pol-ara * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### pol-ara\n\n\n* source group: Polish\n* target group: Arabic\n* OPUS readme: pol-ara\n* model: transformer\n* source language(s): pol\n* target language(s): ara arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.4, chr-F: 0.491", "### System Info:\n\n\n* hf\\_name: pol-ara\n* source\\_languages: pol\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'ar']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: ara\n* short\\_pair: pl-ar\n* chrF2\\_score: 0.491\n* bleu: 20.4\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 1028.0\n* src\\_name: Polish\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: pl\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: pol-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### pol-ara\n\n\n* source group: Polish\n* target group: Arabic\n* OPUS readme: pol-ara\n* model: transformer\n* source language(s): pol\n* target language(s): ara arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.4, chr-F: 0.491", "### System Info:\n\n\n* hf\\_name: pol-ara\n* source\\_languages: pol\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'ar']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: ara\n* short\\_pair: pl-ar\n* chrF2\\_score: 0.491\n* bleu: 20.4\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 1028.0\n* src\\_name: Polish\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: pl\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: pol-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 156, 445 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### pol-ara\n\n\n* source group: Polish\n* target group: Arabic\n* OPUS readme: pol-ara\n* model: transformer\n* source language(s): pol\n* target language(s): ara arz\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.4, chr-F: 0.491### System Info:\n\n\n* hf\\_name: pol-ara\n* source\\_languages: pol\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'ar']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: ara\n* short\\_pair: pl-ar\n* chrF2\\_score: 0.491\n* bleu: 20.4\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 1028.0\n* src\\_name: Polish\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: pl\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: pol-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
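The pol-ara card above notes that a sentence-initial language token of the form `>>id<<` is required when the target side is multilingual (here `ara` / `arz`). A minimal sketch of preparing such an input; the helper name and behaviour are illustrative assumptions, not part of the model card:

```python
def add_target_token(text: str, target_id: str) -> str:
    """Prepend the sentence-initial target-language token (e.g. '>>ara<<')
    expected by opus-mt models with multiple target languages.
    Illustrative helper, not an official API."""
    return f">>{target_id}<< {text}"

# Example: routing a Polish sentence to Egyptian Arabic ('arz') output.
print(add_target_token("Dzień dobry", "arz"))  # → >>arz<< Dzień dobry
```

The prefixed string would then be fed to the tokenizer as usual; models with a single target language (such as pl-de below) do not need the token.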
translation
transformers
### opus-mt-pl-de * source languages: pl * target languages: de * OPUS readme: [pl-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-de/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.pl.de | 47.8 | 0.665 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pl-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pl", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pl #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pl-de * source languages: pl * target languages: de * OPUS readme: pl-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 47.8, chr-F: 0.665
[ "### opus-mt-pl-de\n\n\n* source languages: pl\n* target languages: de\n* OPUS readme: pl-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.8, chr-F: 0.665" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pl-de\n\n\n* source languages: pl\n* target languages: de\n* OPUS readme: pl-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.8, chr-F: 0.665" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pl-de\n\n\n* source languages: pl\n* target languages: de\n* OPUS readme: pl-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.8, chr-F: 0.665" ]
translation
transformers
### opus-mt-pl-en * source languages: pl * target languages: en * OPUS readme: [pl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.pl.en | 54.9 | 0.701 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pl-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pl-en * source languages: pl * target languages: en * OPUS readme: pl-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 54.9, chr-F: 0.701
[ "### opus-mt-pl-en\n\n\n* source languages: pl\n* target languages: en\n* OPUS readme: pl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.9, chr-F: 0.701" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pl-en\n\n\n* source languages: pl\n* target languages: en\n* OPUS readme: pl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.9, chr-F: 0.701" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pl-en\n\n\n* source languages: pl\n* target languages: en\n* OPUS readme: pl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 54.9, chr-F: 0.701" ]
translation
transformers
### pol-epo * source group: Polish * target group: Esperanto * OPUS readme: [pol-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-epo/README.md) * model: transformer-align * source language(s): pol * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.pol.epo | 24.8 | 0.451 | ### System Info: - hf_name: pol-epo - source_languages: pol - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['pl', 'eo'] - src_constituents: {'pol'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-epo/opus-2020-06-16.test.txt - src_alpha3: pol - tgt_alpha3: epo - short_pair: pl-eo - chrF2_score: 0.451 - bleu: 24.8 - brevity_penalty: 0.9670000000000001 - ref_len: 17191.0 - src_name: Polish - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: pl - tgt_alpha2: eo - prefer_old: False - long_pair: pol-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["pl", "eo"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pl-eo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pl", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "pl", "eo" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pl #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### pol-epo * source group: Polish * target group: Esperanto * OPUS readme: pol-epo * model: transformer-align * source language(s): pol * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.8, chr-F: 0.451 ### System Info: * hf\_name: pol-epo * source\_languages: pol * target\_languages: epo * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['pl', 'eo'] * src\_constituents: {'pol'} * tgt\_constituents: {'epo'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: pol * tgt\_alpha3: epo * short\_pair: pl-eo * chrF2\_score: 0.451 * bleu: 24.8 * brevity\_penalty: 0.9670000000000001 * ref\_len: 17191.0 * src\_name: Polish * tgt\_name: Esperanto * train\_date: 2020-06-16 * src\_alpha2: pl * tgt\_alpha2: eo * prefer\_old: False * long\_pair: pol-epo * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### pol-epo\n\n\n* source group: Polish\n* target group: Esperanto\n* OPUS readme: pol-epo\n* model: transformer-align\n* source language(s): pol\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.8, chr-F: 0.451", "### System Info:\n\n\n* hf\\_name: pol-epo\n* source\\_languages: pol\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'eo']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: epo\n* short\\_pair: pl-eo\n* chrF2\\_score: 0.451\n* bleu: 24.8\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 17191.0\n* src\\_name: Polish\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: pl\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: pol-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### pol-epo\n\n\n* source group: Polish\n* target group: Esperanto\n* OPUS readme: pol-epo\n* model: transformer-align\n* source language(s): pol\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.8, chr-F: 0.451", "### System Info:\n\n\n* hf\\_name: pol-epo\n* source\\_languages: pol\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'eo']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: epo\n* short\\_pair: pl-eo\n* chrF2\\_score: 0.451\n* bleu: 24.8\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 17191.0\n* src\\_name: Polish\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: pl\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: pol-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 135, 406 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### pol-epo\n\n\n* source group: Polish\n* target group: Esperanto\n* OPUS readme: pol-epo\n* model: transformer-align\n* source language(s): pol\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.8, chr-F: 0.451### System Info:\n\n\n* hf\\_name: pol-epo\n* source\\_languages: pol\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'eo']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: epo\n* short\\_pair: pl-eo\n* chrF2\\_score: 0.451\n* bleu: 24.8\n* brevity\\_penalty: 0.9670000000000001\n* ref\\_len: 17191.0\n* src\\_name: Polish\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: pl\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: pol-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-pl-es * source languages: pl * target languages: es * OPUS readme: [pl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-es/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.pl.es | 46.9 | 0.654 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pl-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pl", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pl-es * source languages: pl * target languages: es * OPUS readme: pl-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 46.9, chr-F: 0.654
[ "### opus-mt-pl-es\n\n\n* source languages: pl\n* target languages: es\n* OPUS readme: pl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.9, chr-F: 0.654" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pl-es\n\n\n* source languages: pl\n* target languages: es\n* OPUS readme: pl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.9, chr-F: 0.654" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pl-es\n\n\n* source languages: pl\n* target languages: es\n* OPUS readme: pl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.9, chr-F: 0.654" ]
translation
transformers
### opus-mt-pl-fr * source languages: pl * target languages: fr * OPUS readme: [pl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.pl.fr | 49.0 | 0.659 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pl-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pl", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pl-fr * source languages: pl * target languages: fr * OPUS readme: pl-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 49.0, chr-F: 0.659
[ "### opus-mt-pl-fr\n\n\n* source languages: pl\n* target languages: fr\n* OPUS readme: pl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.0, chr-F: 0.659" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pl-fr\n\n\n* source languages: pl\n* target languages: fr\n* OPUS readme: pl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.0, chr-F: 0.659" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pl-fr\n\n\n* source languages: pl\n* target languages: fr\n* OPUS readme: pl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.0, chr-F: 0.659" ]
translation
transformers
### pol-lit * source group: Polish * target group: Lithuanian * OPUS readme: [pol-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-lit/README.md) * model: transformer-align * source language(s): pol * target language(s): lit * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-lit/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-lit/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-lit/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.pol.lit | 43.7 | 0.688 | ### System Info: - hf_name: pol-lit - source_languages: pol - target_languages: lit - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-lit/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['pl', 'lt'] - src_constituents: {'pol'} - tgt_constituents: {'lit'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-lit/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-lit/opus-2020-06-17.test.txt - src_alpha3: pol - tgt_alpha3: lit - short_pair: pl-lt - chrF2_score: 0.688 - bleu: 43.7 - brevity_penalty: 0.981 - ref_len: 10084.0 - src_name: Polish - tgt_name: Lithuanian - train_date: 2020-06-17 - src_alpha2: pl - tgt_alpha2: lt - prefer_old: False - long_pair: pol-lit - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["pl", "lt"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pl-lt
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pl", "lt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "pl", "lt" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pl #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### pol-lit * source group: Polish * target group: Lithuanian * OPUS readme: pol-lit * model: transformer-align * source language(s): pol * target language(s): lit * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 43.7, chr-F: 0.688 ### System Info: * hf\_name: pol-lit * source\_languages: pol * target\_languages: lit * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['pl', 'lt'] * src\_constituents: {'pol'} * tgt\_constituents: {'lit'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: pol * tgt\_alpha3: lit * short\_pair: pl-lt * chrF2\_score: 0.688 * bleu: 43.7 * brevity\_penalty: 0.981 * ref\_len: 10084.0 * src\_name: Polish * tgt\_name: Lithuanian * train\_date: 2020-06-17 * src\_alpha2: pl * tgt\_alpha2: lt * prefer\_old: False * long\_pair: pol-lit * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### pol-lit\n\n\n* source group: Polish\n* target group: Lithuanian\n* OPUS readme: pol-lit\n* model: transformer-align\n* source language(s): pol\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.7, chr-F: 0.688", "### System Info:\n\n\n* hf\\_name: pol-lit\n* source\\_languages: pol\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'lt']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: lit\n* short\\_pair: pl-lt\n* chrF2\\_score: 0.688\n* bleu: 43.7\n* brevity\\_penalty: 0.981\n* ref\\_len: 10084.0\n* src\\_name: Polish\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pl\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: pol-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### pol-lit\n\n\n* source group: Polish\n* target group: Lithuanian\n* OPUS readme: pol-lit\n* model: transformer-align\n* source language(s): pol\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.7, chr-F: 0.688", "### System Info:\n\n\n* hf\\_name: pol-lit\n* source\\_languages: pol\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'lt']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: lit\n* short\\_pair: pl-lt\n* chrF2\\_score: 0.688\n* bleu: 43.7\n* brevity\\_penalty: 0.981\n* ref\\_len: 10084.0\n* src\\_name: Polish\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pl\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: pol-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 131, 392 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### pol-lit\n\n\n* source group: Polish\n* target group: Lithuanian\n* OPUS readme: pol-lit\n* model: transformer-align\n* source language(s): pol\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.7, chr-F: 0.688### System Info:\n\n\n* hf\\_name: pol-lit\n* source\\_languages: pol\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'lt']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: lit\n* short\\_pair: pl-lt\n* chrF2\\_score: 0.688\n* bleu: 43.7\n* brevity\\_penalty: 0.981\n* ref\\_len: 10084.0\n* src\\_name: Polish\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pl\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: pol-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### pol-nor * source group: Polish * target group: Norwegian * OPUS readme: [pol-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-nor/README.md) * model: transformer-align * source language(s): pol * target language(s): nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.pol.nor | 27.5 | 0.479 | ### System Info: - hf_name: pol-nor - source_languages: pol - target_languages: nor - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-nor/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['pl', 'no'] - src_constituents: {'pol'} - tgt_constituents: {'nob', 'nno'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.test.txt - src_alpha3: pol - tgt_alpha3: nor - short_pair: pl-no - chrF2_score: 0.479 - bleu: 27.5 - brevity_penalty: 0.9690000000000001 - ref_len: 2045.0 - src_name: Polish - tgt_name: Norwegian - train_date: 2020-06-17 - src_alpha2: pl - tgt_alpha2: no - prefer_old: False - long_pair: pol-nor - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["pl", false], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pl-no
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pl", "no", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "pl", "no" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pl #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### pol-nor * source group: Polish * target group: Norwegian * OPUS readme: pol-nor * model: transformer-align * source language(s): pol * target language(s): nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.5, chr-F: 0.479 ### System Info: * hf\_name: pol-nor * source\_languages: pol * target\_languages: nor * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['pl', 'no'] * src\_constituents: {'pol'} * tgt\_constituents: {'nob', 'nno'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: pol * tgt\_alpha3: nor * short\_pair: pl-no * chrF2\_score: 0.479 * bleu: 27.5 * brevity\_penalty: 0.9690000000000001 * ref\_len: 2045.0 * src\_name: Polish * tgt\_name: Norwegian * train\_date: 2020-06-17 * src\_alpha2: pl * tgt\_alpha2: no * prefer\_old: False * long\_pair: pol-nor * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### pol-nor\n\n\n* source group: Polish\n* target group: Norwegian\n* OPUS readme: pol-nor\n* model: transformer-align\n* source language(s): pol\n* target language(s): nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.479", "### System Info:\n\n\n* hf\\_name: pol-nor\n* source\\_languages: pol\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'no']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: nor\n* short\\_pair: pl-no\n* chrF2\\_score: 0.479\n* bleu: 27.5\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 2045.0\n* src\\_name: Polish\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pl\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: pol-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### pol-nor\n\n\n* source group: Polish\n* target group: Norwegian\n* OPUS readme: pol-nor\n* model: transformer-align\n* source language(s): pol\n* target language(s): nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.479", "### System Info:\n\n\n* hf\\_name: pol-nor\n* source\\_languages: pol\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'no']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: nor\n* short\\_pair: pl-no\n* chrF2\\_score: 0.479\n* bleu: 27.5\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 2045.0\n* src\\_name: Polish\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pl\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: pol-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 132, 403 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### pol-nor\n\n\n* source group: Polish\n* target group: Norwegian\n* OPUS readme: pol-nor\n* model: transformer-align\n* source language(s): pol\n* target language(s): nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.479### System Info:\n\n\n* hf\\_name: pol-nor\n* source\\_languages: pol\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'no']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: nor\n* short\\_pair: pl-no\n* chrF2\\_score: 0.479\n* bleu: 27.5\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 2045.0\n* src\\_name: Polish\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pl\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: pol-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-pl-sv * source languages: pl * target languages: sv * OPUS readme: [pl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-sv/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-sv/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-sv/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.pl.sv | 58.9 | 0.717 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pl-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "pl", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #pl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-pl-sv * source languages: pl * target languages: sv * OPUS readme: pl-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 58.9, chr-F: 0.717
[ "### opus-mt-pl-sv\n\n\n* source languages: pl\n* target languages: sv\n* OPUS readme: pl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 58.9, chr-F: 0.717" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-pl-sv\n\n\n* source languages: pl\n* target languages: sv\n* OPUS readme: pl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 58.9, chr-F: 0.717" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #pl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-pl-sv\n\n\n* source languages: pl\n* target languages: sv\n* OPUS readme: pl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 58.9, chr-F: 0.717" ]
translation
transformers
### pol-ukr * source group: Polish * target group: Ukrainian * OPUS readme: [pol-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-ukr/README.md) * model: transformer-align * source language(s): pol * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ukr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ukr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ukr/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.pol.ukr | 47.1 | 0.665 | ### System Info: - hf_name: pol-ukr - source_languages: pol - target_languages: ukr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-ukr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['pl', 'uk'] - src_constituents: {'pol'} - tgt_constituents: {'ukr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ukr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-ukr/opus-2020-06-17.test.txt - src_alpha3: pol - tgt_alpha3: ukr - short_pair: pl-uk - chrF2_score: 0.665 - bleu: 47.1 - brevity_penalty: 0.992 - ref_len: 13434.0 - src_name: Polish - tgt_name: Ukrainian - train_date: 2020-06-17 - src_alpha2: pl - tgt_alpha2: uk - prefer_old: False - long_pair: pol-ukr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["pl", "uk"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-pl-uk
null
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "pl", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "pl", "uk" ]
TAGS #transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #pl #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### pol-ukr * source group: Polish * target group: Ukrainian * OPUS readme: pol-ukr * model: transformer-align * source language(s): pol * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 47.1, chr-F: 0.665 ### System Info: * hf\_name: pol-ukr * source\_languages: pol * target\_languages: ukr * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['pl', 'uk'] * src\_constituents: {'pol'} * tgt\_constituents: {'ukr'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: pol * tgt\_alpha3: ukr * short\_pair: pl-uk * chrF2\_score: 0.665 * bleu: 47.1 * brevity\_penalty: 0.992 * ref\_len: 13434.0 * src\_name: Polish * tgt\_name: Ukrainian * train\_date: 2020-06-17 * src\_alpha2: pl * tgt\_alpha2: uk * prefer\_old: False * long\_pair: pol-ukr * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### pol-ukr\n\n\n* source group: Polish\n* target group: Ukrainian\n* OPUS readme: pol-ukr\n* model: transformer-align\n* source language(s): pol\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.1, chr-F: 0.665", "### System Info:\n\n\n* hf\\_name: pol-ukr\n* source\\_languages: pol\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'uk']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: ukr\n* short\\_pair: pl-uk\n* chrF2\\_score: 0.665\n* bleu: 47.1\n* brevity\\_penalty: 0.992\n* ref\\_len: 13434.0\n* src\\_name: Polish\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pl\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: pol-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #pl #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### pol-ukr\n\n\n* source group: Polish\n* target group: Ukrainian\n* OPUS readme: pol-ukr\n* model: transformer-align\n* source language(s): pol\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.1, chr-F: 0.665", "### System Info:\n\n\n* hf\\_name: pol-ukr\n* source\\_languages: pol\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'uk']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: ukr\n* short\\_pair: pl-uk\n* chrF2\\_score: 0.665\n* bleu: 47.1\n* brevity\\_penalty: 0.992\n* ref\\_len: 13434.0\n* src\\_name: Polish\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pl\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: pol-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 55, 134, 396 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #pl #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### pol-ukr\n\n\n* source group: Polish\n* target group: Ukrainian\n* OPUS readme: pol-ukr\n* model: transformer-align\n* source language(s): pol\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 47.1, chr-F: 0.665### System Info:\n\n\n* hf\\_name: pol-ukr\n* source\\_languages: pol\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['pl', 'uk']\n* src\\_constituents: {'pol'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: pol\n* tgt\\_alpha3: ukr\n* short\\_pair: pl-uk\n* chrF2\\_score: 0.665\n* bleu: 47.1\n* brevity\\_penalty: 0.992\n* ref\\_len: 13434.0\n* src\\_name: Polish\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: pl\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: pol-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]