Dataset columns:

| column | type | values / lengths |
|-----------------|-----------------|------------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
| tokens_length | sequencelengths | 1 to 723 |
| input_texts | sequencelengths | 1 to 1 |
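The raw model card lives in the `text` column, the YAML front matter (serialized as JSON) in `metadata`, and the Hub repo id in `id`. A minimal sketch of inspecting records with this schema using the `datasets` library, assuming the rows are available locally as a JSON Lines file (the file name below is a placeholder, not part of the original):

```python
from datasets import load_dataset

# Placeholder path: point this at wherever the dumped rows live.
ds = load_dataset("json", data_files="model_cards.jsonl", split="train")

# Check that the columns listed above are present.
print(ds.column_names)

# Inspect a single record: raw card in "text", front matter in "metadata",
# Hub repo id in "id".
row = ds[0]
print(row["id"], row["pipeline_tag"], row["library_name"])
print(row["text"][:200])
```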
translation
transformers
### tha-eng

* source group: Thai
* target group: English
* OPUS readme: [tha-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tha-eng/README.md)
* model: transformer-align
* source language(s): tha
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tha.eng | 48.1 | 0.644 |

### System Info:
- hf_name: tha-eng
- source_languages: tha
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tha-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['th', 'en']
- src_constituents: {'tha'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tha-eng/opus-2020-06-17.test.txt
- src_alpha3: tha
- tgt_alpha3: eng
- short_pair: th-en
- chrF2_score: 0.644
- bleu: 48.1
- brevity_penalty: 0.9740000000000001
- ref_len: 7407.0
- src_name: Thai
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: th
- tgt_alpha2: en
- prefer_old: False
- long_pair: tgl-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
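This card documents a Marian NMT checkpoint published on the Hub as `Helsinki-NLP/opus-mt-th-en`. A minimal sketch of running it through the `transformers` translation pipeline (the example sentence is illustrative; the exact output depends on installed model and library versions):

```python
from transformers import pipeline

# Thai -> English Marian model from the card above.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-th-en")

result = translator("สวัสดีครับ")  # a short Thai greeting
print(result[0]["translation_text"])
```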
{"language": ["th", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-th-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "th", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "th", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #th #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### tha-eng * source group: Thai * target group: English * OPUS readme: tha-eng * model: transformer-align * source language(s): tha * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 48.1, chr-F: 0.644 ### System Info: * hf\_name: tha-eng * source\_languages: tha * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['th', 'en'] * src\_constituents: {'tha'} * tgt\_constituents: {'eng'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: tha * tgt\_alpha3: eng * short\_pair: th-en * chrF2\_score: 0.644 * bleu: 48.1 * brevity\_penalty: 0.9740000000000001 * ref\_len: 7407.0 * src\_name: Thai * tgt\_name: English * train\_date: 2020-06-17 * src\_alpha2: th * tgt\_alpha2: en * prefer\_old: False * long\_pair: tha-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### tha-eng\n\n\n* source group: Thai\n* target group: English\n* OPUS readme: tha-eng\n* model: transformer-align\n* source language(s): tha\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.1, chr-F: 0.644", "### System Info:\n\n\n* hf\\_name: tha-eng\n* source\\_languages: tha\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['th', 'en']\n* src\\_constituents: {'tha'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tha\n* tgt\\_alpha3: eng\n* short\\_pair: th-en\n* chrF2\\_score: 0.644\n* bleu: 48.1\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 7407.0\n* src\\_name: Thai\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: th\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: tha-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #th #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### tha-eng\n\n\n* source group: Thai\n* target group: English\n* OPUS readme: tha-eng\n* model: transformer-align\n* source language(s): tha\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.1, chr-F: 0.644", "### System Info:\n\n\n* hf\\_name: tha-eng\n* source\\_languages: tha\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['th', 'en']\n* src\\_constituents: {'tha'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tha\n* tgt\\_alpha3: eng\n* short\\_pair: th-en\n* chrF2\\_score: 0.644\n* bleu: 48.1\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 7407.0\n* src\\_name: Thai\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: th\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: tha-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 131, 397 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #th #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### tha-eng\n\n\n* source group: Thai\n* target group: English\n* OPUS readme: tha-eng\n* model: transformer-align\n* source language(s): tha\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.1, chr-F: 0.644### System Info:\n\n\n* hf\\_name: tha-eng\n* source\\_languages: tha\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['th', 'en']\n* src\\_constituents: {'tha'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tha\n* tgt\\_alpha3: eng\n* short\\_pair: th-en\n* chrF2\\_score: 0.644\n* bleu: 48.1\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 7407.0\n* src\\_name: Thai\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: th\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: tha-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-th-fr

* source languages: th
* target languages: fr
* OPUS readme: [th-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/th-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/th-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.th.fr | 20.4 | 0.363 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-th-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "th", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #th #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-th-fr * source languages: th * target languages: fr * OPUS readme: th-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.4, chr-F: 0.363
[ "### opus-mt-th-fr\n\n\n* source languages: th\n* target languages: fr\n* OPUS readme: th-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.4, chr-F: 0.363" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #th #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-th-fr\n\n\n* source languages: th\n* target languages: fr\n* OPUS readme: th-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.4, chr-F: 0.363" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #th #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-th-fr\n\n\n* source languages: th\n* target languages: fr\n* OPUS readme: th-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.4, chr-F: 0.363" ]
translation
transformers
### opus-mt-ti-en

* source languages: ti
* target languages: en
* OPUS readme: [ti-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ti-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ti-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ti.en | 30.4 | 0.461 |
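For finer control than the pipeline, the same kind of checkpoint can be loaded directly with the Marian classes. A sketch using `Helsinki-NLP/opus-mt-ti-en` from the card above; the generation settings are illustrative defaults, not values taken from the original training setup:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ti-en"  # Tigrinya -> English
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# SentencePiece segmentation happens inside the tokenizer, matching the
# "normalization + SentencePiece" pre-processing noted in the card.
batch = tokenizer(["ሰላም"], return_tensors="pt", padding=True)
generated = model.generate(**batch, num_beams=4, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```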
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ti-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ti", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ti #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ti-en * source languages: ti * target languages: en * OPUS readme: ti-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.4, chr-F: 0.461
[ "### opus-mt-ti-en\n\n\n* source languages: ti\n* target languages: en\n* OPUS readme: ti-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.461" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ti #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ti-en\n\n\n* source languages: ti\n* target languages: en\n* OPUS readme: ti-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.461" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ti #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ti-en\n\n\n* source languages: ti\n* target languages: en\n* OPUS readme: ti-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.461" ]
translation
transformers
### opus-mt-tiv-en

* source languages: tiv
* target languages: en
* OPUS readme: [tiv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.en | 31.5 | 0.473 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tiv-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tiv", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tiv #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tiv-en * source languages: tiv * target languages: en * OPUS readme: tiv-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 31.5, chr-F: 0.473
[ "### opus-mt-tiv-en\n\n\n* source languages: tiv\n* target languages: en\n* OPUS readme: tiv-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.5, chr-F: 0.473" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tiv #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tiv-en\n\n\n* source languages: tiv\n* target languages: en\n* OPUS readme: tiv-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.5, chr-F: 0.473" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tiv #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tiv-en\n\n\n* source languages: tiv\n* target languages: en\n* OPUS readme: tiv-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.5, chr-F: 0.473" ]
translation
transformers
### opus-mt-tiv-fr

* source languages: tiv
* target languages: fr
* OPUS readme: [tiv-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.fr | 22.3 | 0.389 |
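The cards link the original Marian weights as a zip archive on the OPUS object store. A small sketch of fetching and unpacking that archive with the Python standard library, using the URL listed above (the local file and directory names are placeholders):

```python
import urllib.request
import zipfile

url = "https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.zip"
archive = "opus-tiv-fr-2020-01-16.zip"  # placeholder local file name

# Download the original weights archive linked in the card.
urllib.request.urlretrieve(url, archive)

# Unpack the downloaded archive into a local directory.
with zipfile.ZipFile(archive) as zf:
    zf.extractall("opus-mt-tiv-fr-original")
```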
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tiv-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tiv", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tiv #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tiv-fr * source languages: tiv * target languages: fr * OPUS readme: tiv-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.3, chr-F: 0.389
[ "### opus-mt-tiv-fr\n\n\n* source languages: tiv\n* target languages: fr\n* OPUS readme: tiv-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.3, chr-F: 0.389" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tiv #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tiv-fr\n\n\n* source languages: tiv\n* target languages: fr\n* OPUS readme: tiv-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.3, chr-F: 0.389" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tiv #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tiv-fr\n\n\n* source languages: tiv\n* target languages: fr\n* OPUS readme: tiv-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.3, chr-F: 0.389" ]
translation
transformers
### opus-mt-tiv-sv

* source languages: tiv
* target languages: sv
* OPUS readme: [tiv-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tiv.sv | 23.7 | 0.416 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tiv-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tiv", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tiv #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tiv-sv * source languages: tiv * target languages: sv * OPUS readme: tiv-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.7, chr-F: 0.416
[ "### opus-mt-tiv-sv\n\n\n* source languages: tiv\n* target languages: sv\n* OPUS readme: tiv-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.416" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tiv #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tiv-sv\n\n\n* source languages: tiv\n* target languages: sv\n* OPUS readme: tiv-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.416" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tiv #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tiv-sv\n\n\n* source languages: tiv\n* target languages: sv\n* OPUS readme: tiv-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.416" ]
translation
transformers
### tgl-deu

* source group: Tagalog
* target group: German
* OPUS readme: [tgl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.deu | 22.7 | 0.473 |

### System Info:
- hf_name: tgl-deu
- source_languages: tgl
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'de']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: deu
- short_pair: tl-de
- chrF2_score: 0.473
- bleu: 22.7
- brevity_penalty: 0.9690000000000001
- ref_len: 2453.0
- src_name: Tagalog
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: de
- prefer_old: False
- long_pair: tgl-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["tl", "de"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tl-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tl", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "tl", "de" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tl #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### tgl-deu * source group: Tagalog * target group: German * OPUS readme: tgl-deu * model: transformer-align * source language(s): tgl\_Latn * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.7, chr-F: 0.473 ### System Info: * hf\_name: tgl-deu * source\_languages: tgl * target\_languages: deu * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['tl', 'de'] * src\_constituents: {'tgl\_Latn'} * tgt\_constituents: {'deu'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: tgl * tgt\_alpha3: deu * short\_pair: tl-de * chrF2\_score: 0.473 * bleu: 22.7 * brevity\_penalty: 0.9690000000000001 * ref\_len: 2453.0 * src\_name: Tagalog * tgt\_name: German * train\_date: 2020-06-17 * src\_alpha2: tl * tgt\_alpha2: de * prefer\_old: False * long\_pair: tgl-deu * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### tgl-deu\n\n\n* source group: Tagalog\n* target group: German\n* OPUS readme: tgl-deu\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.473", "### System Info:\n\n\n* hf\\_name: tgl-deu\n* source\\_languages: tgl\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'de']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: deu\n* short\\_pair: tl-de\n* chrF2\\_score: 0.473\n* bleu: 22.7\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 2453.0\n* src\\_name: Tagalog\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: tgl-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tl #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### tgl-deu\n\n\n* source group: Tagalog\n* target group: German\n* OPUS readme: tgl-deu\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.473", "### System Info:\n\n\n* hf\\_name: tgl-deu\n* source\\_languages: tgl\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'de']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: deu\n* short\\_pair: tl-de\n* chrF2\\_score: 0.473\n* bleu: 22.7\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 2453.0\n* src\\_name: Tagalog\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: tgl-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 144, 417 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tl #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### tgl-deu\n\n\n* source group: Tagalog\n* target group: German\n* OPUS readme: tgl-deu\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.473### System Info:\n\n\n* hf\\_name: tgl-deu\n* source\\_languages: tgl\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'de']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: deu\n* short\\_pair: tl-de\n* chrF2\\_score: 0.473\n* bleu: 22.7\n* brevity\\_penalty: 0.9690000000000001\n* ref\\_len: 2453.0\n* src\\_name: Tagalog\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: tgl-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### tgl-eng

* source group: Tagalog
* target group: English
* OPUS readme: [tgl-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-eng/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.eng | 35.0 | 0.542 |

### System Info:
- hf_name: tgl-eng
- source_languages: tgl
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'en']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-eng/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: eng
- short_pair: tl-en
- chrF2_score: 0.542
- bleu: 35.0
- brevity_penalty: 0.975
- ref_len: 18168.0
- src_name: Tagalog
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: en
- prefer_old: False
- long_pair: tgl-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
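The benchmark numbers above are corpus-level BLEU and chrF on the Tatoeba test set. A rough sketch of how such scores can be computed with `sacrebleu`, assuming you already have model outputs and reference translations as parallel lists (the lists below are placeholders, not data from the card):

```python
import sacrebleu

# Placeholder data: in practice these come from translating the released
# test set (opus-2020-06-17.test.txt) and pairing with its references.
hypotheses = ["this is a test sentence", "another translated sentence"]
references = [["this is a test sentence", "another translated sentence"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)

# Note: score scaling conventions (0 to 1 vs 0 to 100 for chrF) differ
# between tools and versions, so raw numbers may not match the card exactly.
print("BLEU:", bleu.score, "chrF:", chrf.score)
```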
{"language": ["tl", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tl-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "tl", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### tgl-eng * source group: Tagalog * target group: English * OPUS readme: tgl-eng * model: transformer-align * source language(s): tgl\_Latn * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 35.0, chr-F: 0.542 ### System Info: * hf\_name: tgl-eng * source\_languages: tgl * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['tl', 'en'] * src\_constituents: {'tgl\_Latn'} * tgt\_constituents: {'eng'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: tgl * tgt\_alpha3: eng * short\_pair: tl-en * chrF2\_score: 0.542 * bleu: 35.0 * brevity\_penalty: 0.975 * ref\_len: 18168.0 * src\_name: Tagalog * tgt\_name: English * train\_date: 2020-06-17 * src\_alpha2: tl * tgt\_alpha2: en * prefer\_old: False * long\_pair: tgl-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### tgl-eng\n\n\n* source group: Tagalog\n* target group: English\n* OPUS readme: tgl-eng\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.0, chr-F: 0.542", "### System Info:\n\n\n* hf\\_name: tgl-eng\n* source\\_languages: tgl\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'en']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: eng\n* short\\_pair: tl-en\n* chrF2\\_score: 0.542\n* bleu: 35.0\n* brevity\\_penalty: 0.975\n* ref\\_len: 18168.0\n* src\\_name: Tagalog\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: tgl-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### tgl-eng\n\n\n* source group: Tagalog\n* target group: English\n* OPUS readme: tgl-eng\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.0, chr-F: 0.542", "### System Info:\n\n\n* hf\\_name: tgl-eng\n* source\\_languages: tgl\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'en']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: eng\n* short\\_pair: tl-en\n* chrF2\\_score: 0.542\n* bleu: 35.0\n* brevity\\_penalty: 0.975\n* ref\\_len: 18168.0\n* src\\_name: Tagalog\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: tgl-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 141, 406 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### tgl-eng\n\n\n* source group: Tagalog\n* target group: English\n* OPUS readme: tgl-eng\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.0, chr-F: 0.542### System Info:\n\n\n* hf\\_name: tgl-eng\n* source\\_languages: tgl\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'en']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: eng\n* short\\_pair: tl-en\n* chrF2\\_score: 0.542\n* bleu: 35.0\n* brevity\\_penalty: 0.975\n* ref\\_len: 18168.0\n* src\\_name: Tagalog\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: tgl-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### tgl-spa

* source group: Tagalog
* target group: Spanish
* OPUS readme: [tgl-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-spa/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.spa | 31.6 | 0.531 |

### System Info:
- hf_name: tgl-spa
- source_languages: tgl
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'es']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-spa/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: spa
- short_pair: tl-es
- chrF2_score: 0.531
- bleu: 31.6
- brevity_penalty: 0.997
- ref_len: 4327.0
- src_name: Tagalog
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: es
- prefer_old: False
- long_pair: tgl-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["tl", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tl-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tl", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "tl", "es" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### tgl-spa * source group: Tagalog * target group: Spanish * OPUS readme: tgl-spa * model: transformer-align * source language(s): tgl\_Latn * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 31.6, chr-F: 0.531 ### System Info: * hf\_name: tgl-spa * source\_languages: tgl * target\_languages: spa * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['tl', 'es'] * src\_constituents: {'tgl\_Latn'} * tgt\_constituents: {'spa'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: tgl * tgt\_alpha3: spa * short\_pair: tl-es * chrF2\_score: 0.531 * bleu: 31.6 * brevity\_penalty: 0.997 * ref\_len: 4327.0 * src\_name: Tagalog * tgt\_name: Spanish * train\_date: 2020-06-17 * src\_alpha2: tl * tgt\_alpha2: es * prefer\_old: False * long\_pair: tgl-spa * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### tgl-spa\n\n\n* source group: Tagalog\n* target group: Spanish\n* OPUS readme: tgl-spa\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.531", "### System Info:\n\n\n* hf\\_name: tgl-spa\n* source\\_languages: tgl\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'es']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: spa\n* short\\_pair: tl-es\n* chrF2\\_score: 0.531\n* bleu: 31.6\n* brevity\\_penalty: 0.997\n* ref\\_len: 4327.0\n* src\\_name: Tagalog\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: tgl-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### tgl-spa\n\n\n* source group: Tagalog\n* target group: Spanish\n* OPUS readme: tgl-spa\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.531", "### System Info:\n\n\n* hf\\_name: tgl-spa\n* source\\_languages: tgl\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'es']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: spa\n* short\\_pair: tl-es\n* chrF2\\_score: 0.531\n* bleu: 31.6\n* brevity\\_penalty: 0.997\n* ref\\_len: 4327.0\n* src\\_name: Tagalog\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: tgl-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 141, 406 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### tgl-spa\n\n\n* source group: Tagalog\n* target group: Spanish\n* OPUS readme: tgl-spa\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.531### System Info:\n\n\n* hf\\_name: tgl-spa\n* source\\_languages: tgl\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'es']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: spa\n* short\\_pair: tl-es\n* chrF2\\_score: 0.531\n* bleu: 31.6\n* brevity\\_penalty: 0.997\n* ref\\_len: 4327.0\n* src\\_name: Tagalog\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: tgl-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### tgl-por

* source group: Tagalog
* target group: Portuguese
* OPUS readme: [tgl-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-por/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.por | 28.8 | 0.522 |

### System Info:
- hf_name: tgl-por
- source_languages: tgl
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'pt']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-por/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: por
- short_pair: tl-pt
- chrF2_score: 0.522
- bleu: 28.8
- brevity_penalty: 0.981
- ref_len: 12826.0
- src_name: Tagalog
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: pt
- prefer_old: False
- long_pair: tgl-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["tl", "pt"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tl-pt
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tl", "pt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "tl", "pt" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tl #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### tgl-por * source group: Tagalog * target group: Portuguese * OPUS readme: tgl-por * model: transformer-align * source language(s): tgl\_Latn * target language(s): por * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 28.8, chr-F: 0.522 ### System Info: * hf\_name: tgl-por * source\_languages: tgl * target\_languages: por * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['tl', 'pt'] * src\_constituents: {'tgl\_Latn'} * tgt\_constituents: {'por'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: tgl * tgt\_alpha3: por * short\_pair: tl-pt * chrF2\_score: 0.522 * bleu: 28.8 * brevity\_penalty: 0.981 * ref\_len: 12826.0 * src\_name: Tagalog * tgt\_name: Portuguese * train\_date: 2020-06-17 * src\_alpha2: tl * tgt\_alpha2: pt * prefer\_old: False * long\_pair: tgl-por * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### tgl-por\n\n\n* source group: Tagalog\n* target group: Portuguese\n* OPUS readme: tgl-por\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.8, chr-F: 0.522", "### System Info:\n\n\n* hf\\_name: tgl-por\n* source\\_languages: tgl\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'pt']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: por\n* short\\_pair: tl-pt\n* chrF2\\_score: 0.522\n* bleu: 28.8\n* brevity\\_penalty: 0.981\n* ref\\_len: 12826.0\n* src\\_name: Tagalog\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: tgl-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tl #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### tgl-por\n\n\n* source group: Tagalog\n* target group: Portuguese\n* OPUS readme: tgl-por\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.8, chr-F: 0.522", "### System Info:\n\n\n* hf\\_name: tgl-por\n* source\\_languages: tgl\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'pt']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: por\n* short\\_pair: tl-pt\n* chrF2\\_score: 0.522\n* bleu: 28.8\n* brevity\\_penalty: 0.981\n* ref\\_len: 12826.0\n* src\\_name: Tagalog\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: tgl-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 141, 406 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tl #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### tgl-por\n\n\n* source group: Tagalog\n* target group: Portuguese\n* OPUS readme: tgl-por\n* model: transformer-align\n* source language(s): tgl\\_Latn\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.8, chr-F: 0.522### System Info:\n\n\n* hf\\_name: tgl-por\n* source\\_languages: tgl\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tl', 'pt']\n* src\\_constituents: {'tgl\\_Latn'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tgl\n* tgt\\_alpha3: por\n* short\\_pair: tl-pt\n* chrF2\\_score: 0.522\n* bleu: 28.8\n* brevity\\_penalty: 0.981\n* ref\\_len: 12826.0\n* src\\_name: Tagalog\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: tl\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: tgl-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-tll-en * source languages: tll * target languages: en * OPUS readme: [tll-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tll.en | 34.5 | 0.500 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tll-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tll", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tll #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tll-en * source languages: tll * target languages: en * OPUS readme: tll-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 34.5, chr-F: 0.500
[ "### opus-mt-tll-en\n\n\n* source languages: tll\n* target languages: en\n* OPUS readme: tll-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.5, chr-F: 0.500" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tll #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tll-en\n\n\n* source languages: tll\n* target languages: en\n* OPUS readme: tll-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.5, chr-F: 0.500" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tll #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tll-en\n\n\n* source languages: tll\n* target languages: en\n* OPUS readme: tll-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.5, chr-F: 0.500" ]
translation
transformers
### opus-mt-tll-es * source languages: tll * target languages: es * OPUS readme: [tll-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tll.es | 22.9 | 0.403 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tll-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tll", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tll #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tll-es * source languages: tll * target languages: es * OPUS readme: tll-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.9, chr-F: 0.403
[ "### opus-mt-tll-es\n\n\n* source languages: tll\n* target languages: es\n* OPUS readme: tll-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.9, chr-F: 0.403" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tll #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tll-es\n\n\n* source languages: tll\n* target languages: es\n* OPUS readme: tll-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.9, chr-F: 0.403" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tll #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tll-es\n\n\n* source languages: tll\n* target languages: es\n* OPUS readme: tll-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.9, chr-F: 0.403" ]
translation
transformers
### opus-mt-tll-fi * source languages: tll * target languages: fi * OPUS readme: [tll-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tll.fi | 22.4 | 0.441 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tll-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tll", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tll #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tll-fi * source languages: tll * target languages: fi * OPUS readme: tll-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.4, chr-F: 0.441
[ "### opus-mt-tll-fi\n\n\n* source languages: tll\n* target languages: fi\n* OPUS readme: tll-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.441" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tll #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tll-fi\n\n\n* source languages: tll\n* target languages: fi\n* OPUS readme: tll-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.441" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tll #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tll-fi\n\n\n* source languages: tll\n* target languages: fi\n* OPUS readme: tll-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.441" ]
translation
transformers
### opus-mt-tll-fr * source languages: tll * target languages: fr * OPUS readme: [tll-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tll.fr | 25.2 | 0.426 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tll-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tll", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tll #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tll-fr * source languages: tll * target languages: fr * OPUS readme: tll-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.2, chr-F: 0.426
[ "### opus-mt-tll-fr\n\n\n* source languages: tll\n* target languages: fr\n* OPUS readme: tll-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.426" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tll #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tll-fr\n\n\n* source languages: tll\n* target languages: fr\n* OPUS readme: tll-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.426" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tll #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tll-fr\n\n\n* source languages: tll\n* target languages: fr\n* OPUS readme: tll-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.2, chr-F: 0.426" ]
translation
transformers
### opus-mt-tll-sv * source languages: tll * target languages: sv * OPUS readme: [tll-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tll.sv | 25.6 | 0.436 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tll-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tll", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tll #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tll-sv * source languages: tll * target languages: sv * OPUS readme: tll-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.6, chr-F: 0.436
[ "### opus-mt-tll-sv\n\n\n* source languages: tll\n* target languages: sv\n* OPUS readme: tll-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.436" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tll #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tll-sv\n\n\n* source languages: tll\n* target languages: sv\n* OPUS readme: tll-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.436" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tll #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tll-sv\n\n\n* source languages: tll\n* target languages: sv\n* OPUS readme: tll-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.436" ]
translation
transformers
### opus-mt-tn-en * source languages: tn * target languages: en * OPUS readme: [tn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-en/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tn.en | 43.4 | 0.589 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tn-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tn", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tn-en * source languages: tn * target languages: en * OPUS readme: tn-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 43.4, chr-F: 0.589
[ "### opus-mt-tn-en\n\n\n* source languages: tn\n* target languages: en\n* OPUS readme: tn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.4, chr-F: 0.589" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tn-en\n\n\n* source languages: tn\n* target languages: en\n* OPUS readme: tn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.4, chr-F: 0.589" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tn-en\n\n\n* source languages: tn\n* target languages: en\n* OPUS readme: tn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.4, chr-F: 0.589" ]
translation
transformers
### opus-mt-tn-es * source languages: tn * target languages: es * OPUS readme: [tn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tn.es | 29.1 | 0.479 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tn-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tn", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tn-es * source languages: tn * target languages: es * OPUS readme: tn-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 29.1, chr-F: 0.479
[ "### opus-mt-tn-es\n\n\n* source languages: tn\n* target languages: es\n* OPUS readme: tn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.479" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tn-es\n\n\n* source languages: tn\n* target languages: es\n* OPUS readme: tn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.479" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tn-es\n\n\n* source languages: tn\n* target languages: es\n* OPUS readme: tn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.479" ]
translation
transformers
### opus-mt-tn-fr * source languages: tn * target languages: fr * OPUS readme: [tn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tn.fr | 29.0 | 0.474 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tn-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tn", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tn-fr * source languages: tn * target languages: fr * OPUS readme: tn-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 29.0, chr-F: 0.474
[ "### opus-mt-tn-fr\n\n\n* source languages: tn\n* target languages: fr\n* OPUS readme: tn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.474" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tn-fr\n\n\n* source languages: tn\n* target languages: fr\n* OPUS readme: tn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.474" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tn-fr\n\n\n* source languages: tn\n* target languages: fr\n* OPUS readme: tn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.474" ]
translation
transformers
### opus-mt-tn-sv * source languages: tn * target languages: sv * OPUS readme: [tn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tn.sv | 32.0 | 0.508 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tn-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tn", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tn-sv * source languages: tn * target languages: sv * OPUS readme: tn-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 32.0, chr-F: 0.508
[ "### opus-mt-tn-sv\n\n\n* source languages: tn\n* target languages: sv\n* OPUS readme: tn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.0, chr-F: 0.508" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tn-sv\n\n\n* source languages: tn\n* target languages: sv\n* OPUS readme: tn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.0, chr-F: 0.508" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tn-sv\n\n\n* source languages: tn\n* target languages: sv\n* OPUS readme: tn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.0, chr-F: 0.508" ]
translation
transformers
### opus-mt-to-en * source languages: to * target languages: en * OPUS readme: [to-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.to.en | 49.3 | 0.627 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-to-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "to", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #to #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-to-en * source languages: to * target languages: en * OPUS readme: to-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 49.3, chr-F: 0.627
[ "### opus-mt-to-en\n\n\n* source languages: to\n* target languages: en\n* OPUS readme: to-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.3, chr-F: 0.627" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #to #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-to-en\n\n\n* source languages: to\n* target languages: en\n* OPUS readme: to-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.3, chr-F: 0.627" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #to #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-to-en\n\n\n* source languages: to\n* target languages: en\n* OPUS readme: to-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.3, chr-F: 0.627" ]
translation
transformers
### opus-mt-to-es * source languages: to * target languages: es * OPUS readme: [to-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.to.es | 26.6 | 0.447 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-to-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "to", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #to #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-to-es * source languages: to * target languages: es * OPUS readme: to-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.6, chr-F: 0.447
[ "### opus-mt-to-es\n\n\n* source languages: to\n* target languages: es\n* OPUS readme: to-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.447" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #to #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-to-es\n\n\n* source languages: to\n* target languages: es\n* OPUS readme: to-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.447" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #to #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-to-es\n\n\n* source languages: to\n* target languages: es\n* OPUS readme: to-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.447" ]
translation
transformers
### opus-mt-to-fr * source languages: to * target languages: fr * OPUS readme: [to-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.to.fr | 27.9 | 0.456 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-to-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "to", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #to #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-to-fr * source languages: to * target languages: fr * OPUS readme: to-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.9, chr-F: 0.456
[ "### opus-mt-to-fr\n\n\n* source languages: to\n* target languages: fr\n* OPUS readme: to-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.456" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #to #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-to-fr\n\n\n* source languages: to\n* target languages: fr\n* OPUS readme: to-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.456" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #to #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-to-fr\n\n\n* source languages: to\n* target languages: fr\n* OPUS readme: to-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.9, chr-F: 0.456" ]
translation
transformers
### opus-mt-to-sv * source languages: to * target languages: sv * OPUS readme: [to-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/to-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/to-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/to-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.to.sv | 30.7 | 0.493 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-to-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "to", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #to #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-to-sv * source languages: to * target languages: sv * OPUS readme: to-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.7, chr-F: 0.493
[ "### opus-mt-to-sv\n\n\n* source languages: to\n* target languages: sv\n* OPUS readme: to-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.493" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #to #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-to-sv\n\n\n* source languages: to\n* target languages: sv\n* OPUS readme: to-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.493" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #to #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-to-sv\n\n\n* source languages: to\n* target languages: sv\n* OPUS readme: to-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.7, chr-F: 0.493" ]
translation
transformers
### opus-mt-toi-en * source languages: toi * target languages: en * OPUS readme: [toi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.toi.en | 39.0 | 0.539 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-toi-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "toi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #toi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-toi-en * source languages: toi * target languages: en * OPUS readme: toi-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 39.0, chr-F: 0.539
[ "### opus-mt-toi-en\n\n\n* source languages: toi\n* target languages: en\n* OPUS readme: toi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.0, chr-F: 0.539" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #toi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-toi-en\n\n\n* source languages: toi\n* target languages: en\n* OPUS readme: toi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.0, chr-F: 0.539" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #toi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-toi-en\n\n\n* source languages: toi\n* target languages: en\n* OPUS readme: toi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.0, chr-F: 0.539" ]
translation
transformers
### opus-mt-toi-es * source languages: toi * target languages: es * OPUS readme: [toi-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.toi.es | 24.6 | 0.416 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-toi-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "toi", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #toi #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-toi-es * source languages: toi * target languages: es * OPUS readme: toi-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.6, chr-F: 0.416
[ "### opus-mt-toi-es\n\n\n* source languages: toi\n* target languages: es\n* OPUS readme: toi-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.416" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #toi #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-toi-es\n\n\n* source languages: toi\n* target languages: es\n* OPUS readme: toi-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.416" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #toi #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-toi-es\n\n\n* source languages: toi\n* target languages: es\n* OPUS readme: toi-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.6, chr-F: 0.416" ]
translation
transformers
### opus-mt-toi-fi * source languages: toi * target languages: fi * OPUS readme: [toi-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.toi.fi | 24.5 | 0.464 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-toi-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "toi", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #toi #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-toi-fi * source languages: toi * target languages: fi * OPUS readme: toi-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.5, chr-F: 0.464
[ "### opus-mt-toi-fi\n\n\n* source languages: toi\n* target languages: fi\n* OPUS readme: toi-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.464" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #toi #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-toi-fi\n\n\n* source languages: toi\n* target languages: fi\n* OPUS readme: toi-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.464" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #toi #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-toi-fi\n\n\n* source languages: toi\n* target languages: fi\n* OPUS readme: toi-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.464" ]
translation
transformers
### opus-mt-toi-fr * source languages: toi * target languages: fr * OPUS readme: [toi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.toi.fr | 26.5 | 0.432 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-toi-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "toi", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #toi #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-toi-fr * source languages: toi * target languages: fr * OPUS readme: toi-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.5, chr-F: 0.432
[ "### opus-mt-toi-fr\n\n\n* source languages: toi\n* target languages: fr\n* OPUS readme: toi-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.5, chr-F: 0.432" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #toi #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-toi-fr\n\n\n* source languages: toi\n* target languages: fr\n* OPUS readme: toi-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.5, chr-F: 0.432" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #toi #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-toi-fr\n\n\n* source languages: toi\n* target languages: fr\n* OPUS readme: toi-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.5, chr-F: 0.432" ]
translation
transformers
### opus-mt-toi-sv * source languages: toi * target languages: sv * OPUS readme: [toi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.toi.sv | 27.0 | 0.448 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-toi-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "toi", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #toi #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-toi-sv * source languages: toi * target languages: sv * OPUS readme: toi-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.0, chr-F: 0.448
[ "### opus-mt-toi-sv\n\n\n* source languages: toi\n* target languages: sv\n* OPUS readme: toi-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.0, chr-F: 0.448" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #toi #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-toi-sv\n\n\n* source languages: toi\n* target languages: sv\n* OPUS readme: toi-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.0, chr-F: 0.448" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #toi #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-toi-sv\n\n\n* source languages: toi\n* target languages: sv\n* OPUS readme: toi-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.0, chr-F: 0.448" ]
translation
transformers
### opus-mt-tpi-en * source languages: tpi * target languages: en * OPUS readme: [tpi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tpi-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tpi.en | 29.1 | 0.448 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tpi-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tpi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tpi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tpi-en * source languages: tpi * target languages: en * OPUS readme: tpi-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 29.1, chr-F: 0.448
[ "### opus-mt-tpi-en\n\n\n* source languages: tpi\n* target languages: en\n* OPUS readme: tpi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.448" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tpi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tpi-en\n\n\n* source languages: tpi\n* target languages: en\n* OPUS readme: tpi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.448" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tpi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tpi-en\n\n\n* source languages: tpi\n* target languages: en\n* OPUS readme: tpi-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.448" ]
translation
transformers
### opus-mt-tpi-sv * source languages: tpi * target languages: sv * OPUS readme: [tpi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tpi-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tpi-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tpi.sv | 21.6 | 0.396 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tpi-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tpi", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tpi #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tpi-sv * source languages: tpi * target languages: sv * OPUS readme: tpi-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 21.6, chr-F: 0.396
[ "### opus-mt-tpi-sv\n\n\n* source languages: tpi\n* target languages: sv\n* OPUS readme: tpi-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.6, chr-F: 0.396" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tpi #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tpi-sv\n\n\n* source languages: tpi\n* target languages: sv\n* OPUS readme: tpi-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.6, chr-F: 0.396" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tpi #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tpi-sv\n\n\n* source languages: tpi\n* target languages: sv\n* OPUS readme: tpi-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.6, chr-F: 0.396" ]
translation
transformers
### tur-ara * source group: Turkish * target group: Arabic * OPUS readme: [tur-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ara/README.md) * model: transformer * source language(s): tur * target language(s): apc_Latn ara ara_Latn arq_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tur.ara | 14.9 | 0.455 | ### System Info: - hf_name: tur-ara - source_languages: tur - target_languages: ara - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ara/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'ar'] - src_constituents: {'tur'} - tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ara/opus-2020-07-03.test.txt - src_alpha3: tur - tgt_alpha3: ara - short_pair: tr-ar - chrF2_score: 0.455 - bleu: 14.9 - brevity_penalty: 0.988 - ref_len: 6944.0 - src_name: Turkish - tgt_name: Arabic - train_date: 2020-07-03 - src_alpha2: tr - tgt_alpha2: ar - prefer_old: False - long_pair: tur-ara - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["tr", "ar"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tr-ar
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tr", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "tr", "ar" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tr #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### tur-ara * source group: Turkish * target group: Arabic * OPUS readme: tur-ara * model: transformer * source language(s): tur * target language(s): apc\_Latn ara ara\_Latn arq\_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 14.9, chr-F: 0.455 ### System Info: * hf\_name: tur-ara * source\_languages: tur * target\_languages: ara * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['tr', 'ar'] * src\_constituents: {'tur'} * tgt\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: tur * tgt\_alpha3: ara * short\_pair: tr-ar * chrF2\_score: 0.455 * bleu: 14.9 * brevity\_penalty: 0.988 * ref\_len: 6944.0 * src\_name: Turkish * tgt\_name: Arabic * train\_date: 2020-07-03 * src\_alpha2: tr * tgt\_alpha2: ar * prefer\_old: False * long\_pair: tur-ara * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### tur-ara\n\n\n* source group: Turkish\n* target group: Arabic\n* OPUS readme: tur-ara\n* model: transformer\n* source language(s): tur\n* target language(s): apc\\_Latn ara ara\\_Latn arq\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.9, chr-F: 0.455", "### System Info:\n\n\n* hf\\_name: tur-ara\n* source\\_languages: tur\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'ar']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: ara\n* short\\_pair: tr-ar\n* chrF2\\_score: 0.455\n* bleu: 14.9\n* brevity\\_penalty: 0.988\n* ref\\_len: 6944.0\n* src\\_name: Turkish\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: tr\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: tur-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### tur-ara\n\n\n* source group: Turkish\n* target group: Arabic\n* OPUS readme: tur-ara\n* model: transformer\n* source language(s): tur\n* target language(s): apc\\_Latn ara ara\\_Latn arq\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.9, chr-F: 0.455", "### System Info:\n\n\n* hf\\_name: tur-ara\n* source\\_languages: tur\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'ar']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: ara\n* short\\_pair: tr-ar\n* chrF2\\_score: 0.455\n* bleu: 14.9\n* brevity\\_penalty: 0.988\n* ref\\_len: 6944.0\n* src\\_name: Turkish\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: tr\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: tur-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 177, 444 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### tur-ara\n\n\n* source group: Turkish\n* target group: Arabic\n* OPUS readme: tur-ara\n* model: transformer\n* source language(s): tur\n* target language(s): apc\\_Latn ara ara\\_Latn arq\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 14.9, chr-F: 0.455### System Info:\n\n\n* hf\\_name: tur-ara\n* source\\_languages: tur\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'ar']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: ara\n* short\\_pair: tr-ar\n* chrF2\\_score: 0.455\n* bleu: 14.9\n* brevity\\_penalty: 0.988\n* ref\\_len: 6944.0\n* src\\_name: Turkish\n* tgt\\_name: Arabic\n* train\\_date: 2020-07-03\n* src\\_alpha2: tr\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: tur-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### tur-aze * source group: Turkish * target group: Azerbaijani * OPUS readme: [tur-aze](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-aze/README.md) * model: transformer-align * source language(s): tur * target language(s): aze_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tur.aze | 27.7 | 0.551 | ### System Info: - hf_name: tur-aze - source_languages: tur - target_languages: aze - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-aze/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'az'] - src_constituents: {'tur'} - tgt_constituents: {'aze_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.test.txt - src_alpha3: tur - tgt_alpha3: aze - short_pair: tr-az - chrF2_score: 0.551 - bleu: 27.7 - brevity_penalty: 1.0 - ref_len: 5436.0 - src_name: Turkish - tgt_name: Azerbaijani - train_date: 2020-06-16 - src_alpha2: tr - tgt_alpha2: az - prefer_old: False - long_pair: tur-aze - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["tr", "az"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tr-az
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tr", "az", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "tr", "az" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tr #az #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### tur-aze * source group: Turkish * target group: Azerbaijani * OPUS readme: tur-aze * model: transformer-align * source language(s): tur * target language(s): aze\_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.7, chr-F: 0.551 ### System Info: * hf\_name: tur-aze * source\_languages: tur * target\_languages: aze * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['tr', 'az'] * src\_constituents: {'tur'} * tgt\_constituents: {'aze\_Latn'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: tur * tgt\_alpha3: aze * short\_pair: tr-az * chrF2\_score: 0.551 * bleu: 27.7 * brevity\_penalty: 1.0 * ref\_len: 5436.0 * src\_name: Turkish * tgt\_name: Azerbaijani * train\_date: 2020-06-16 * src\_alpha2: tr * tgt\_alpha2: az * prefer\_old: False * long\_pair: tur-aze * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### tur-aze\n\n\n* source group: Turkish\n* target group: Azerbaijani\n* OPUS readme: tur-aze\n* model: transformer-align\n* source language(s): tur\n* target language(s): aze\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.551", "### System Info:\n\n\n* hf\\_name: tur-aze\n* source\\_languages: tur\n* target\\_languages: aze\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'az']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'aze\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: aze\n* short\\_pair: tr-az\n* chrF2\\_score: 0.551\n* bleu: 27.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 5436.0\n* src\\_name: Turkish\n* tgt\\_name: Azerbaijani\n* train\\_date: 2020-06-16\n* src\\_alpha2: tr\n* tgt\\_alpha2: az\n* prefer\\_old: False\n* long\\_pair: tur-aze\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #az #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### tur-aze\n\n\n* source group: Turkish\n* target group: Azerbaijani\n* OPUS readme: tur-aze\n* model: transformer-align\n* source language(s): tur\n* target language(s): aze\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.551", "### System Info:\n\n\n* hf\\_name: tur-aze\n* source\\_languages: tur\n* target\\_languages: aze\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'az']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'aze\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: aze\n* short\\_pair: tr-az\n* chrF2\\_score: 0.551\n* bleu: 27.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 5436.0\n* src\\_name: Turkish\n* tgt\\_name: Azerbaijani\n* train\\_date: 2020-06-16\n* src\\_alpha2: tr\n* tgt\\_alpha2: az\n* prefer\\_old: False\n* long\\_pair: tur-aze\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 142, 405 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #az #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### tur-aze\n\n\n* source group: Turkish\n* target group: Azerbaijani\n* OPUS readme: tur-aze\n* model: transformer-align\n* source language(s): tur\n* target language(s): aze\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.551### System Info:\n\n\n* hf\\_name: tur-aze\n* source\\_languages: tur\n* target\\_languages: aze\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'az']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'aze\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: aze\n* short\\_pair: tr-az\n* chrF2\\_score: 0.551\n* bleu: 27.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 5436.0\n* src\\_name: Turkish\n* tgt\\_name: Azerbaijani\n* train\\_date: 2020-06-16\n* src\\_alpha2: tr\n* tgt\\_alpha2: az\n* prefer\\_old: False\n* long\\_pair: tur-aze\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-tr-en * source languages: tr * target languages: en * OPUS readme: [tr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2016-entr.tr.en | 27.6 | 0.548 | | newstest2016-entr.tr.en | 25.2 | 0.532 | | newstest2017-entr.tr.en | 24.7 | 0.530 | | newstest2018-entr.tr.en | 27.0 | 0.547 | | Tatoeba.tr.en | 63.5 | 0.760 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tr-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tr", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tr #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tr-en * source languages: tr * target languages: en * OPUS readme: tr-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.6, chr-F: 0.548 testset: URL, BLEU: 25.2, chr-F: 0.532 testset: URL, BLEU: 24.7, chr-F: 0.530 testset: URL, BLEU: 27.0, chr-F: 0.547 testset: URL, BLEU: 63.5, chr-F: 0.760
[ "### opus-mt-tr-en\n\n\n* source languages: tr\n* target languages: en\n* OPUS readme: tr-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.548\ntestset: URL, BLEU: 25.2, chr-F: 0.532\ntestset: URL, BLEU: 24.7, chr-F: 0.530\ntestset: URL, BLEU: 27.0, chr-F: 0.547\ntestset: URL, BLEU: 63.5, chr-F: 0.760" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tr-en\n\n\n* source languages: tr\n* target languages: en\n* OPUS readme: tr-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.548\ntestset: URL, BLEU: 25.2, chr-F: 0.532\ntestset: URL, BLEU: 24.7, chr-F: 0.530\ntestset: URL, BLEU: 27.0, chr-F: 0.547\ntestset: URL, BLEU: 63.5, chr-F: 0.760" ]
[ 51, 196 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tr-en\n\n\n* source languages: tr\n* target languages: en\n* OPUS readme: tr-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.548\ntestset: URL, BLEU: 25.2, chr-F: 0.532\ntestset: URL, BLEU: 24.7, chr-F: 0.530\ntestset: URL, BLEU: 27.0, chr-F: 0.547\ntestset: URL, BLEU: 63.5, chr-F: 0.760" ]
translation
transformers
### tur-epo * source group: Turkish * target group: Esperanto * OPUS readme: [tur-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-epo/README.md) * model: transformer-align * source language(s): tur * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tur.epo | 17.0 | 0.373 | ### System Info: - hf_name: tur-epo - source_languages: tur - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'eo'] - src_constituents: {'tur'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-epo/opus-2020-06-16.test.txt - src_alpha3: tur - tgt_alpha3: epo - short_pair: tr-eo - chrF2_score: 0.373 - bleu: 17.0 - brevity_penalty: 0.8809999999999999 - ref_len: 33762.0 - src_name: Turkish - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: tr - tgt_alpha2: eo - prefer_old: False - long_pair: tur-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["tr", "eo"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tr-eo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tr", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "tr", "eo" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tr #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### tur-epo * source group: Turkish * target group: Esperanto * OPUS readme: tur-epo * model: transformer-align * source language(s): tur * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 17.0, chr-F: 0.373 ### System Info: * hf\_name: tur-epo * source\_languages: tur * target\_languages: epo * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['tr', 'eo'] * src\_constituents: {'tur'} * tgt\_constituents: {'epo'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: tur * tgt\_alpha3: epo * short\_pair: tr-eo * chrF2\_score: 0.373 * bleu: 17.0 * brevity\_penalty: 0.8809999999999999 * ref\_len: 33762.0 * src\_name: Turkish * tgt\_name: Esperanto * train\_date: 2020-06-16 * src\_alpha2: tr * tgt\_alpha2: eo * prefer\_old: False * long\_pair: tur-epo * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### tur-epo\n\n\n* source group: Turkish\n* target group: Esperanto\n* OPUS readme: tur-epo\n* model: transformer-align\n* source language(s): tur\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.0, chr-F: 0.373", "### System Info:\n\n\n* hf\\_name: tur-epo\n* source\\_languages: tur\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'eo']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: epo\n* short\\_pair: tr-eo\n* chrF2\\_score: 0.373\n* bleu: 17.0\n* brevity\\_penalty: 0.8809999999999999\n* ref\\_len: 33762.0\n* src\\_name: Turkish\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: tr\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: tur-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### tur-epo\n\n\n* source group: Turkish\n* target group: Esperanto\n* OPUS readme: tur-epo\n* model: transformer-align\n* source language(s): tur\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.0, chr-F: 0.373", "### System Info:\n\n\n* hf\\_name: tur-epo\n* source\\_languages: tur\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'eo']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: epo\n* short\\_pair: tr-eo\n* chrF2\\_score: 0.373\n* bleu: 17.0\n* brevity\\_penalty: 0.8809999999999999\n* ref\\_len: 33762.0\n* src\\_name: Turkish\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: tr\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: tur-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 139, 419 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### tur-epo\n\n\n* source group: Turkish\n* target group: Esperanto\n* OPUS readme: tur-epo\n* model: transformer-align\n* source language(s): tur\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 17.0, chr-F: 0.373### System Info:\n\n\n* hf\\_name: tur-epo\n* source\\_languages: tur\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'eo']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: epo\n* short\\_pair: tr-eo\n* chrF2\\_score: 0.373\n* bleu: 17.0\n* brevity\\_penalty: 0.8809999999999999\n* ref\\_len: 33762.0\n* src\\_name: Turkish\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: tr\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: tur-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-tr-es * source languages: tr * target languages: es * OPUS readme: [tr-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.zip) * test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.test.txt) * test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.tr.es | 56.3 | 0.722 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tr-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tr", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tr #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tr-es * source languages: tr * target languages: es * OPUS readme: tr-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 56.3, chr-F: 0.722
[ "### opus-mt-tr-es\n\n\n* source languages: tr\n* target languages: es\n* OPUS readme: tr-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.3, chr-F: 0.722" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tr-es\n\n\n* source languages: tr\n* target languages: es\n* OPUS readme: tr-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.3, chr-F: 0.722" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tr-es\n\n\n* source languages: tr\n* target languages: es\n* OPUS readme: tr-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.3, chr-F: 0.722" ]
translation
transformers
### opus-mt-tr-fr * source languages: tr * target languages: fr * OPUS readme: [tr-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.tr.fr | 45.3 | 0.627 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tr-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tr", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tr #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tr-fr * source languages: tr * target languages: fr * OPUS readme: tr-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 45.3, chr-F: 0.627
[ "### opus-mt-tr-fr\n\n\n* source languages: tr\n* target languages: fr\n* OPUS readme: tr-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.3, chr-F: 0.627" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tr-fr\n\n\n* source languages: tr\n* target languages: fr\n* OPUS readme: tr-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.3, chr-F: 0.627" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tr-fr\n\n\n* source languages: tr\n* target languages: fr\n* OPUS readme: tr-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.3, chr-F: 0.627" ]
translation
transformers
### tur-lit * source group: Turkish * target group: Lithuanian * OPUS readme: [tur-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md) * model: transformer-align * source language(s): tur * target language(s): lit * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tur.lit | 35.6 | 0.631 | ### System Info: - hf_name: tur-lit - source_languages: tur - target_languages: lit - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-lit/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'lt'] - src_constituents: {'tur'} - tgt_constituents: {'lit'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-lit/opus-2020-06-17.test.txt - src_alpha3: tur - tgt_alpha3: lit - short_pair: tr-lt - chrF2_score: 0.631 - bleu: 35.6 - brevity_penalty: 0.9490000000000001 - ref_len: 8285.0 - src_name: Turkish - tgt_name: Lithuanian - train_date: 2020-06-17 - src_alpha2: tr - tgt_alpha2: lt - prefer_old: False - long_pair: tur-lit - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["tr", "lt"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tr-lt
null
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "tr", "lt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "tr", "lt" ]
TAGS #transformers #pytorch #marian #text2text-generation #translation #tr #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### tur-lit * source group: Turkish * target group: Lithuanian * OPUS readme: tur-lit * model: transformer-align * source language(s): tur * target language(s): lit * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 35.6, chr-F: 0.631 ### System Info: * hf\_name: tur-lit * source\_languages: tur * target\_languages: lit * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['tr', 'lt'] * src\_constituents: {'tur'} * tgt\_constituents: {'lit'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: tur * tgt\_alpha3: lit * short\_pair: tr-lt * chrF2\_score: 0.631 * bleu: 35.6 * brevity\_penalty: 0.9490000000000001 * ref\_len: 8285.0 * src\_name: Turkish * tgt\_name: Lithuanian * train\_date: 2020-06-17 * src\_alpha2: tr * tgt\_alpha2: lt * prefer\_old: False * long\_pair: tur-lit * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### tur-lit\n\n\n* source group: Turkish\n* target group: Lithuanian\n* OPUS readme: tur-lit\n* model: transformer-align\n* source language(s): tur\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.6, chr-F: 0.631", "### System Info:\n\n\n* hf\\_name: tur-lit\n* source\\_languages: tur\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'lt']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: lit\n* short\\_pair: tr-lt\n* chrF2\\_score: 0.631\n* bleu: 35.6\n* brevity\\_penalty: 0.9490000000000001\n* ref\\_len: 8285.0\n* src\\_name: Turkish\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: tr\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: tur-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #translation #tr #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### tur-lit\n\n\n* source group: Turkish\n* target group: Lithuanian\n* OPUS readme: tur-lit\n* model: transformer-align\n* source language(s): tur\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.6, chr-F: 0.631", "### System Info:\n\n\n* hf\\_name: tur-lit\n* source\\_languages: tur\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'lt']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: lit\n* short\\_pair: tr-lt\n* chrF2\\_score: 0.631\n* bleu: 35.6\n* brevity\\_penalty: 0.9490000000000001\n* ref\\_len: 8285.0\n* src\\_name: Turkish\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: tr\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: tur-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 48, 134, 402 ]
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #translation #tr #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### tur-lit\n\n\n* source group: Turkish\n* target group: Lithuanian\n* OPUS readme: tur-lit\n* model: transformer-align\n* source language(s): tur\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.6, chr-F: 0.631### System Info:\n\n\n* hf\\_name: tur-lit\n* source\\_languages: tur\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'lt']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: lit\n* short\\_pair: tr-lt\n* chrF2\\_score: 0.631\n* bleu: 35.6\n* brevity\\_penalty: 0.9490000000000001\n* ref\\_len: 8285.0\n* src\\_name: Turkish\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: tr\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: tur-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-tr-sv * source languages: tr * target languages: sv * OPUS readme: [tr-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tr.sv | 26.3 | 0.478 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tr-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tr", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tr #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tr-sv * source languages: tr * target languages: sv * OPUS readme: tr-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.3, chr-F: 0.478
[ "### opus-mt-tr-sv\n\n\n* source languages: tr\n* target languages: sv\n* OPUS readme: tr-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.478" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tr-sv\n\n\n* source languages: tr\n* target languages: sv\n* OPUS readme: tr-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.478" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tr-sv\n\n\n* source languages: tr\n* target languages: sv\n* OPUS readme: tr-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.478" ]
translation
transformers
### tur-ukr * source group: Turkish * target group: Ukrainian * OPUS readme: [tur-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md) * model: transformer-align * source language(s): tur * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.tur.ukr | 42.5 | 0.624 | ### System Info: - hf_name: tur-ukr - source_languages: tur - target_languages: ukr - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-ukr/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'uk'] - src_constituents: {'tur'} - tgt_constituents: {'ukr'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-ukr/opus-2020-06-17.test.txt - src_alpha3: tur - tgt_alpha3: ukr - short_pair: tr-uk - chrF2_score: 0.624 - bleu: 42.5 - brevity_penalty: 0.983 - ref_len: 12988.0 - src_name: Turkish - tgt_name: Ukrainian - train_date: 2020-06-17 - src_alpha2: tr - tgt_alpha2: uk - prefer_old: False - long_pair: tur-ukr - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["tr", "uk"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tr-uk
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tr", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "tr", "uk" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tr #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### tur-ukr * source group: Turkish * target group: Ukrainian * OPUS readme: tur-ukr * model: transformer-align * source language(s): tur * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 42.5, chr-F: 0.624 ### System Info: * hf\_name: tur-ukr * source\_languages: tur * target\_languages: ukr * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['tr', 'uk'] * src\_constituents: {'tur'} * tgt\_constituents: {'ukr'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: tur * tgt\_alpha3: ukr * short\_pair: tr-uk * chrF2\_score: 0.624 * bleu: 42.5 * brevity\_penalty: 0.983 * ref\_len: 12988.0 * src\_name: Turkish * tgt\_name: Ukrainian * train\_date: 2020-06-17 * src\_alpha2: tr * tgt\_alpha2: uk * prefer\_old: False * long\_pair: tur-ukr * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### tur-ukr\n\n\n* source group: Turkish\n* target group: Ukrainian\n* OPUS readme: tur-ukr\n* model: transformer-align\n* source language(s): tur\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.5, chr-F: 0.624", "### System Info:\n\n\n* hf\\_name: tur-ukr\n* source\\_languages: tur\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'uk']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: ukr\n* short\\_pair: tr-uk\n* chrF2\\_score: 0.624\n* bleu: 42.5\n* brevity\\_penalty: 0.983\n* ref\\_len: 12988.0\n* src\\_name: Turkish\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: tr\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: tur-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### tur-ukr\n\n\n* source group: Turkish\n* target group: Ukrainian\n* OPUS readme: tur-ukr\n* model: transformer-align\n* source language(s): tur\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.5, chr-F: 0.624", "### System Info:\n\n\n* hf\\_name: tur-ukr\n* source\\_languages: tur\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'uk']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: ukr\n* short\\_pair: tr-uk\n* chrF2\\_score: 0.624\n* bleu: 42.5\n* brevity\\_penalty: 0.983\n* ref\\_len: 12988.0\n* src\\_name: Turkish\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: tr\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: tur-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 402 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tr #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### tur-ukr\n\n\n* source group: Turkish\n* target group: Ukrainian\n* OPUS readme: tur-ukr\n* model: transformer-align\n* source language(s): tur\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.5, chr-F: 0.624### System Info:\n\n\n* hf\\_name: tur-ukr\n* source\\_languages: tur\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tr', 'uk']\n* src\\_constituents: {'tur'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: tur\n* tgt\\_alpha3: ukr\n* short\\_pair: tr-uk\n* chrF2\\_score: 0.624\n* bleu: 42.5\n* brevity\\_penalty: 0.983\n* ref\\_len: 12988.0\n* src\\_name: Turkish\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: tr\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: tur-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### trk-eng * source group: Turkic languages * target group: English * OPUS readme: [trk-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/trk-eng/README.md) * model: transformer * source language(s): aze_Latn bak chv crh crh_Latn kaz_Cyrl kaz_Latn kir_Cyrl kjh kum ota_Arab ota_Latn sah tat tat_Arab tat_Latn tuk tuk_Latn tur tyv uig_Arab uig_Cyrl uzb_Cyrl uzb_Latn * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2016-entr-tureng.tur.eng | 5.0 | 0.242 | | newstest2016-entr-tureng.tur.eng | 3.7 | 0.231 | | newstest2017-entr-tureng.tur.eng | 3.7 | 0.229 | | newstest2018-entr-tureng.tur.eng | 4.1 | 0.230 | | Tatoeba-test.aze-eng.aze.eng | 15.1 | 0.330 | | Tatoeba-test.bak-eng.bak.eng | 3.3 | 0.185 | | Tatoeba-test.chv-eng.chv.eng | 1.3 | 0.161 | | Tatoeba-test.crh-eng.crh.eng | 10.8 | 0.325 | | Tatoeba-test.kaz-eng.kaz.eng | 9.6 | 0.264 | | Tatoeba-test.kir-eng.kir.eng | 15.3 | 0.328 | | Tatoeba-test.kjh-eng.kjh.eng | 1.8 | 0.121 | | Tatoeba-test.kum-eng.kum.eng | 16.1 | 0.277 | | Tatoeba-test.multi.eng | 12.0 | 0.304 | | Tatoeba-test.ota-eng.ota.eng | 2.0 | 0.149 | | Tatoeba-test.sah-eng.sah.eng | 0.7 | 0.140 | | Tatoeba-test.tat-eng.tat.eng | 4.0 | 0.215 | | Tatoeba-test.tuk-eng.tuk.eng | 5.5 | 0.243 | | Tatoeba-test.tur-eng.tur.eng | 26.8 | 0.443 | | Tatoeba-test.tyv-eng.tyv.eng | 1.3 | 0.111 | | Tatoeba-test.uig-eng.uig.eng | 0.2 | 0.111 | | Tatoeba-test.uzb-eng.uzb.eng | 4.6 | 0.195 | ### System Info: - hf_name: trk-eng - source_languages: trk - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/trk-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tt', 'cv', 'tk', 'tr', 'ba', 'trk', 'en'] - src_constituents: {'kir_Cyrl', 'tat_Latn', 'tat', 'chv', 'uzb_Cyrl', 'kaz_Latn', 'aze_Latn', 'crh', 'kjh', 'uzb_Latn', 'ota_Arab', 'tuk_Latn', 'tuk', 'tat_Arab', 'sah', 'tyv', 'tur', 'uig_Arab', 'crh_Latn', 'kaz_Cyrl', 'uig_Cyrl', 'kum', 'ota_Latn', 'bak'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/trk-eng/opus2m-2020-08-01.test.txt - src_alpha3: trk - tgt_alpha3: eng - short_pair: trk-en - chrF2_score: 0.304 - bleu: 12.0 - brevity_penalty: 1.0 - ref_len: 18733.0 - src_name: Turkic languages - tgt_name: English - train_date: 2020-08-01 - src_alpha2: trk - tgt_alpha2: en - prefer_old: False - long_pair: trk-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["tt", "cv", "tk", "tr", "ba", "trk", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-trk-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tt", "cv", "tk", "tr", "ba", "trk", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "tt", "cv", "tk", "tr", "ba", "trk", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tt #cv #tk #tr #ba #trk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### trk-eng * source group: Turkic languages * target group: English * OPUS readme: trk-eng * model: transformer * source language(s): aze\_Latn bak chv crh crh\_Latn kaz\_Cyrl kaz\_Latn kir\_Cyrl kjh kum ota\_Arab ota\_Latn sah tat tat\_Arab tat\_Latn tuk tuk\_Latn tur tyv uig\_Arab uig\_Cyrl uzb\_Cyrl uzb\_Latn * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 5.0, chr-F: 0.242 testset: URL, BLEU: 3.7, chr-F: 0.231 testset: URL, BLEU: 3.7, chr-F: 0.229 testset: URL, BLEU: 4.1, chr-F: 0.230 testset: URL, BLEU: 15.1, chr-F: 0.330 testset: URL, BLEU: 3.3, chr-F: 0.185 testset: URL, BLEU: 1.3, chr-F: 0.161 testset: URL, BLEU: 10.8, chr-F: 0.325 testset: URL, BLEU: 9.6, chr-F: 0.264 testset: URL, BLEU: 15.3, chr-F: 0.328 testset: URL, BLEU: 1.8, chr-F: 0.121 testset: URL, BLEU: 16.1, chr-F: 0.277 testset: URL, BLEU: 12.0, chr-F: 0.304 testset: URL, BLEU: 2.0, chr-F: 0.149 testset: URL, BLEU: 0.7, chr-F: 0.140 testset: URL, BLEU: 4.0, chr-F: 0.215 testset: URL, BLEU: 5.5, chr-F: 0.243 testset: URL, BLEU: 26.8, chr-F: 0.443 testset: URL, BLEU: 1.3, chr-F: 0.111 testset: URL, BLEU: 0.2, chr-F: 0.111 testset: URL, BLEU: 4.6, chr-F: 0.195 ### System Info: * hf\_name: trk-eng * source\_languages: trk * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['tt', 'cv', 'tk', 'tr', 'ba', 'trk', 'en'] * src\_constituents: {'kir\_Cyrl', 'tat\_Latn', 'tat', 'chv', 'uzb\_Cyrl', 'kaz\_Latn', 'aze\_Latn', 'crh', 'kjh', 'uzb\_Latn', 'ota\_Arab', 'tuk\_Latn', 'tuk', 'tat\_Arab', 'sah', 'tyv', 'tur', 'uig\_Arab', 'crh\_Latn', 'kaz\_Cyrl', 'uig\_Cyrl', 'kum', 'ota\_Latn', 'bak'} * tgt\_constituents: {'eng'} * src\_multilingual: True * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: trk * tgt\_alpha3: eng * short\_pair: trk-en * chrF2\_score: 0.304 * bleu: 12.0 * brevity\_penalty: 1.0 * ref\_len: 18733.0 * src\_name: Turkic languages * tgt\_name: English * train\_date: 2020-08-01 * src\_alpha2: trk * tgt\_alpha2: en * prefer\_old: False * long\_pair: trk-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### trk-eng\n\n\n* source group: Turkic languages\n* target group: English\n* OPUS readme: trk-eng\n* model: transformer\n* source language(s): aze\\_Latn bak chv crh crh\\_Latn kaz\\_Cyrl kaz\\_Latn kir\\_Cyrl kjh kum ota\\_Arab ota\\_Latn sah tat tat\\_Arab tat\\_Latn tuk tuk\\_Latn tur tyv uig\\_Arab uig\\_Cyrl uzb\\_Cyrl uzb\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 5.0, chr-F: 0.242\ntestset: URL, BLEU: 3.7, chr-F: 0.231\ntestset: URL, BLEU: 3.7, chr-F: 0.229\ntestset: URL, BLEU: 4.1, chr-F: 0.230\ntestset: URL, BLEU: 15.1, chr-F: 0.330\ntestset: URL, BLEU: 3.3, chr-F: 0.185\ntestset: URL, BLEU: 1.3, chr-F: 0.161\ntestset: URL, BLEU: 10.8, chr-F: 0.325\ntestset: URL, BLEU: 9.6, chr-F: 0.264\ntestset: URL, BLEU: 15.3, chr-F: 0.328\ntestset: URL, BLEU: 1.8, chr-F: 0.121\ntestset: URL, BLEU: 16.1, chr-F: 0.277\ntestset: URL, BLEU: 12.0, chr-F: 0.304\ntestset: URL, BLEU: 2.0, chr-F: 0.149\ntestset: URL, BLEU: 0.7, chr-F: 0.140\ntestset: URL, BLEU: 4.0, chr-F: 0.215\ntestset: URL, BLEU: 5.5, chr-F: 0.243\ntestset: URL, BLEU: 26.8, chr-F: 0.443\ntestset: URL, BLEU: 1.3, chr-F: 0.111\ntestset: URL, BLEU: 0.2, chr-F: 0.111\ntestset: URL, BLEU: 4.6, chr-F: 0.195", "### System Info:\n\n\n* hf\\_name: trk-eng\n* source\\_languages: trk\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tt', 'cv', 'tk', 'tr', 'ba', 'trk', 'en']\n* src\\_constituents: {'kir\\_Cyrl', 'tat\\_Latn', 'tat', 'chv', 'uzb\\_Cyrl', 'kaz\\_Latn', 'aze\\_Latn', 'crh', 'kjh', 'uzb\\_Latn', 'ota\\_Arab', 'tuk\\_Latn', 'tuk', 'tat\\_Arab', 'sah', 'tyv', 'tur', 'uig\\_Arab', 'crh\\_Latn', 'kaz\\_Cyrl', 'uig\\_Cyrl', 'kum', 'ota\\_Latn', 'bak'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: trk\n* tgt\\_alpha3: eng\n* short\\_pair: trk-en\n* chrF2\\_score: 0.304\n* bleu: 12.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 18733.0\n* src\\_name: Turkic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: trk\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: trk-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tt #cv #tk #tr #ba #trk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### trk-eng\n\n\n* source group: Turkic languages\n* target group: English\n* OPUS readme: trk-eng\n* model: transformer\n* source language(s): aze\\_Latn bak chv crh crh\\_Latn kaz\\_Cyrl kaz\\_Latn kir\\_Cyrl kjh kum ota\\_Arab ota\\_Latn sah tat tat\\_Arab tat\\_Latn tuk tuk\\_Latn tur tyv uig\\_Arab uig\\_Cyrl uzb\\_Cyrl uzb\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 5.0, chr-F: 0.242\ntestset: URL, BLEU: 3.7, chr-F: 0.231\ntestset: URL, BLEU: 3.7, chr-F: 0.229\ntestset: URL, BLEU: 4.1, chr-F: 0.230\ntestset: URL, BLEU: 15.1, chr-F: 0.330\ntestset: URL, BLEU: 3.3, chr-F: 0.185\ntestset: URL, BLEU: 1.3, chr-F: 0.161\ntestset: URL, BLEU: 10.8, chr-F: 0.325\ntestset: URL, BLEU: 9.6, chr-F: 0.264\ntestset: URL, BLEU: 15.3, chr-F: 0.328\ntestset: URL, BLEU: 1.8, chr-F: 0.121\ntestset: URL, BLEU: 16.1, chr-F: 0.277\ntestset: URL, BLEU: 12.0, chr-F: 0.304\ntestset: URL, BLEU: 2.0, chr-F: 0.149\ntestset: URL, BLEU: 0.7, chr-F: 0.140\ntestset: URL, BLEU: 4.0, chr-F: 0.215\ntestset: URL, BLEU: 5.5, chr-F: 0.243\ntestset: URL, BLEU: 26.8, chr-F: 0.443\ntestset: URL, BLEU: 1.3, chr-F: 0.111\ntestset: URL, BLEU: 0.2, chr-F: 0.111\ntestset: URL, BLEU: 4.6, chr-F: 0.195", "### System Info:\n\n\n* hf\\_name: trk-eng\n* source\\_languages: trk\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tt', 'cv', 'tk', 'tr', 'ba', 'trk', 'en']\n* src\\_constituents: {'kir\\_Cyrl', 'tat\\_Latn', 'tat', 'chv', 'uzb\\_Cyrl', 'kaz\\_Latn', 'aze\\_Latn', 'crh', 'kjh', 'uzb\\_Latn', 'ota\\_Arab', 'tuk\\_Latn', 'tuk', 'tat\\_Arab', 'sah', 'tyv', 'tur', 'uig\\_Arab', 'crh\\_Latn', 'kaz\\_Cyrl', 'uig\\_Cyrl', 'kum', 'ota\\_Latn', 'bak'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: trk\n* tgt\\_alpha3: eng\n* short\\_pair: trk-en\n* chrF2\\_score: 0.304\n* bleu: 12.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 18733.0\n* src\\_name: Turkic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: trk\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: trk-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 63, 680, 597 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tt #cv #tk #tr #ba #trk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### trk-eng\n\n\n* source group: Turkic languages\n* target group: English\n* OPUS readme: trk-eng\n* model: transformer\n* source language(s): aze\\_Latn bak chv crh crh\\_Latn kaz\\_Cyrl kaz\\_Latn kir\\_Cyrl kjh kum ota\\_Arab ota\\_Latn sah tat tat\\_Arab tat\\_Latn tuk tuk\\_Latn tur tyv uig\\_Arab uig\\_Cyrl uzb\\_Cyrl uzb\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 5.0, chr-F: 0.242\ntestset: URL, BLEU: 3.7, chr-F: 0.231\ntestset: URL, BLEU: 3.7, chr-F: 0.229\ntestset: URL, BLEU: 4.1, chr-F: 0.230\ntestset: URL, BLEU: 15.1, chr-F: 0.330\ntestset: URL, BLEU: 3.3, chr-F: 0.185\ntestset: URL, BLEU: 1.3, chr-F: 0.161\ntestset: URL, BLEU: 10.8, chr-F: 0.325\ntestset: URL, BLEU: 9.6, chr-F: 0.264\ntestset: URL, BLEU: 15.3, chr-F: 0.328\ntestset: URL, BLEU: 1.8, chr-F: 0.121\ntestset: URL, BLEU: 16.1, chr-F: 0.277\ntestset: URL, BLEU: 12.0, chr-F: 0.304\ntestset: URL, BLEU: 2.0, chr-F: 0.149\ntestset: URL, BLEU: 0.7, chr-F: 0.140\ntestset: URL, BLEU: 4.0, chr-F: 0.215\ntestset: URL, BLEU: 5.5, chr-F: 0.243\ntestset: URL, BLEU: 26.8, chr-F: 0.443\ntestset: URL, BLEU: 1.3, chr-F: 0.111\ntestset: URL, BLEU: 0.2, chr-F: 0.111\ntestset: URL, BLEU: 4.6, chr-F: 0.195### System Info:\n\n\n* hf\\_name: trk-eng\n* source\\_languages: trk\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['tt', 'cv', 'tk', 'tr', 'ba', 'trk', 'en']\n* src\\_constituents: {'kir\\_Cyrl', 'tat\\_Latn', 'tat', 'chv', 'uzb\\_Cyrl', 'kaz\\_Latn', 'aze\\_Latn', 'crh', 'kjh', 'uzb\\_Latn', 'ota\\_Arab', 'tuk\\_Latn', 'tuk', 'tat\\_Arab', 'sah', 'tyv', 'tur', 'uig\\_Arab', 'crh\\_Latn', 'kaz\\_Cyrl', 'uig\\_Cyrl', 'kum', 'ota\\_Latn', 'bak'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: trk\n* tgt\\_alpha3: eng\n* short\\_pair: trk-en\n* chrF2\\_score: 0.304\n* bleu: 12.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 18733.0\n* src\\_name: Turkic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: trk\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: trk-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-ts-en * source languages: ts * target languages: en * OPUS readme: [ts-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ts.en | 44.0 | 0.590 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ts-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ts", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ts #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ts-en * source languages: ts * target languages: en * OPUS readme: ts-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 44.0, chr-F: 0.590
[ "### opus-mt-ts-en\n\n\n* source languages: ts\n* target languages: en\n* OPUS readme: ts-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.0, chr-F: 0.590" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ts #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ts-en\n\n\n* source languages: ts\n* target languages: en\n* OPUS readme: ts-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.0, chr-F: 0.590" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ts #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ts-en\n\n\n* source languages: ts\n* target languages: en\n* OPUS readme: ts-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 44.0, chr-F: 0.590" ]
translation
transformers
### opus-mt-ts-es * source languages: ts * target languages: es * OPUS readme: [ts-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ts.es | 28.1 | 0.468 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ts-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ts", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ts #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ts-es * source languages: ts * target languages: es * OPUS readme: ts-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 28.1, chr-F: 0.468
[ "### opus-mt-ts-es\n\n\n* source languages: ts\n* target languages: es\n* OPUS readme: ts-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.1, chr-F: 0.468" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ts #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ts-es\n\n\n* source languages: ts\n* target languages: es\n* OPUS readme: ts-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.1, chr-F: 0.468" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ts #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ts-es\n\n\n* source languages: ts\n* target languages: es\n* OPUS readme: ts-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.1, chr-F: 0.468" ]
translation
transformers
### opus-mt-ts-fi * source languages: ts * target languages: fi * OPUS readme: [ts-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ts.fi | 27.7 | 0.509 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ts-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ts", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ts #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ts-fi * source languages: ts * target languages: fi * OPUS readme: ts-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.7, chr-F: 0.509
[ "### opus-mt-ts-fi\n\n\n* source languages: ts\n* target languages: fi\n* OPUS readme: ts-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.509" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ts #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ts-fi\n\n\n* source languages: ts\n* target languages: fi\n* OPUS readme: ts-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.509" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ts #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ts-fi\n\n\n* source languages: ts\n* target languages: fi\n* OPUS readme: ts-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.7, chr-F: 0.509" ]
translation
transformers
### opus-mt-ts-fr * source languages: ts * target languages: fr * OPUS readme: [ts-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ts.fr | 29.9 | 0.475 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ts-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ts", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ts #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ts-fr * source languages: ts * target languages: fr * OPUS readme: ts-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 29.9, chr-F: 0.475
[ "### opus-mt-ts-fr\n\n\n* source languages: ts\n* target languages: fr\n* OPUS readme: ts-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.9, chr-F: 0.475" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ts #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ts-fr\n\n\n* source languages: ts\n* target languages: fr\n* OPUS readme: ts-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.9, chr-F: 0.475" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ts #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ts-fr\n\n\n* source languages: ts\n* target languages: fr\n* OPUS readme: ts-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.9, chr-F: 0.475" ]
translation
transformers
### opus-mt-ts-sv * source languages: ts * target languages: sv * OPUS readme: [ts-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ts.sv | 32.6 | 0.510 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ts-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ts", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ts #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ts-sv * source languages: ts * target languages: sv * OPUS readme: ts-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 32.6, chr-F: 0.510
[ "### opus-mt-ts-sv\n\n\n* source languages: ts\n* target languages: sv\n* OPUS readme: ts-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.510" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ts #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ts-sv\n\n\n* source languages: ts\n* target languages: sv\n* OPUS readme: ts-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.510" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ts #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ts-sv\n\n\n* source languages: ts\n* target languages: sv\n* OPUS readme: ts-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.510" ]
translation
transformers
### opus-mt-tum-en * source languages: tum * target languages: en * OPUS readme: [tum-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-en/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tum.en | 31.7 | 0.470 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tum-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tum", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tum #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tum-en * source languages: tum * target languages: en * OPUS readme: tum-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 31.7, chr-F: 0.470
[ "### opus-mt-tum-en\n\n\n* source languages: tum\n* target languages: en\n* OPUS readme: tum-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.7, chr-F: 0.470" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tum #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tum-en\n\n\n* source languages: tum\n* target languages: en\n* OPUS readme: tum-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.7, chr-F: 0.470" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tum #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tum-en\n\n\n* source languages: tum\n* target languages: en\n* OPUS readme: tum-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.7, chr-F: 0.470" ]
translation
transformers
### opus-mt-tum-es * source languages: tum * target languages: es * OPUS readme: [tum-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tum.es | 22.6 | 0.390 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tum-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tum", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tum #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tum-es * source languages: tum * target languages: es * OPUS readme: tum-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.6, chr-F: 0.390
[ "### opus-mt-tum-es\n\n\n* source languages: tum\n* target languages: es\n* OPUS readme: tum-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.390" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tum #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tum-es\n\n\n* source languages: tum\n* target languages: es\n* OPUS readme: tum-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.390" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tum #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tum-es\n\n\n* source languages: tum\n* target languages: es\n* OPUS readme: tum-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.390" ]
translation
transformers
### opus-mt-tum-fr * source languages: tum * target languages: fr * OPUS readme: [tum-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tum.fr | 24.0 | 0.403 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tum-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tum", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tum #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tum-fr * source languages: tum * target languages: fr * OPUS readme: tum-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.0, chr-F: 0.403
[ "### opus-mt-tum-fr\n\n\n* source languages: tum\n* target languages: fr\n* OPUS readme: tum-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.0, chr-F: 0.403" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tum #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tum-fr\n\n\n* source languages: tum\n* target languages: fr\n* OPUS readme: tum-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.0, chr-F: 0.403" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tum #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tum-fr\n\n\n* source languages: tum\n* target languages: fr\n* OPUS readme: tum-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.0, chr-F: 0.403" ]
translation
transformers
### opus-mt-tum-sv * source languages: tum * target languages: sv * OPUS readme: [tum-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tum-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tum-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tum-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tum.sv | 23.3 | 0.410 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tum-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tum", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tum #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tum-sv * source languages: tum * target languages: sv * OPUS readme: tum-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.3, chr-F: 0.410
[ "### opus-mt-tum-sv\n\n\n* source languages: tum\n* target languages: sv\n* OPUS readme: tum-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.410" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tum #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tum-sv\n\n\n* source languages: tum\n* target languages: sv\n* OPUS readme: tum-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.410" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tum #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tum-sv\n\n\n* source languages: tum\n* target languages: sv\n* OPUS readme: tum-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.410" ]
translation
transformers
### opus-mt-tvl-en * source languages: tvl * target languages: en * OPUS readme: [tvl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-en/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tvl.en | 37.3 | 0.528 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tvl-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tvl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tvl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tvl-en * source languages: tvl * target languages: en * OPUS readme: tvl-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 37.3, chr-F: 0.528
[ "### opus-mt-tvl-en\n\n\n* source languages: tvl\n* target languages: en\n* OPUS readme: tvl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.528" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tvl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tvl-en\n\n\n* source languages: tvl\n* target languages: en\n* OPUS readme: tvl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.528" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tvl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tvl-en\n\n\n* source languages: tvl\n* target languages: en\n* OPUS readme: tvl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.3, chr-F: 0.528" ]
translation
transformers
### opus-mt-tvl-es * source languages: tvl * target languages: es * OPUS readme: [tvl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tvl.es | 21.0 | 0.388 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tvl-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tvl", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tvl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tvl-es * source languages: tvl * target languages: es * OPUS readme: tvl-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 21.0, chr-F: 0.388
[ "### opus-mt-tvl-es\n\n\n* source languages: tvl\n* target languages: es\n* OPUS readme: tvl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.0, chr-F: 0.388" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tvl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tvl-es\n\n\n* source languages: tvl\n* target languages: es\n* OPUS readme: tvl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.0, chr-F: 0.388" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tvl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tvl-es\n\n\n* source languages: tvl\n* target languages: es\n* OPUS readme: tvl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.0, chr-F: 0.388" ]
translation
transformers
### opus-mt-tvl-fi * source languages: tvl * target languages: fi * OPUS readme: [tvl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tvl.fi | 22.0 | 0.439 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tvl-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tvl", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tvl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tvl-fi * source languages: tvl * target languages: fi * OPUS readme: tvl-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.0, chr-F: 0.439
[ "### opus-mt-tvl-fi\n\n\n* source languages: tvl\n* target languages: fi\n* OPUS readme: tvl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.0, chr-F: 0.439" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tvl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tvl-fi\n\n\n* source languages: tvl\n* target languages: fi\n* OPUS readme: tvl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.0, chr-F: 0.439" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tvl #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tvl-fi\n\n\n* source languages: tvl\n* target languages: fi\n* OPUS readme: tvl-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.0, chr-F: 0.439" ]
translation
transformers
### opus-mt-tvl-fr * source languages: tvl * target languages: fr * OPUS readme: [tvl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tvl.fr | 24.0 | 0.410 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tvl-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tvl", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tvl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tvl-fr * source languages: tvl * target languages: fr * OPUS readme: tvl-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.0, chr-F: 0.410
[ "### opus-mt-tvl-fr\n\n\n* source languages: tvl\n* target languages: fr\n* OPUS readme: tvl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.0, chr-F: 0.410" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tvl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tvl-fr\n\n\n* source languages: tvl\n* target languages: fr\n* OPUS readme: tvl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.0, chr-F: 0.410" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tvl #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tvl-fr\n\n\n* source languages: tvl\n* target languages: fr\n* OPUS readme: tvl-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.0, chr-F: 0.410" ]
translation
transformers
### opus-mt-tvl-sv * source languages: tvl * target languages: sv * OPUS readme: [tvl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tvl-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tvl-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tvl-sv/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tvl.sv | 24.7 | 0.427 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tvl-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tvl", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tvl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tvl-sv * source languages: tvl * target languages: sv * OPUS readme: tvl-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.7, chr-F: 0.427
[ "### opus-mt-tvl-sv\n\n\n* source languages: tvl\n* target languages: sv\n* OPUS readme: tvl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.7, chr-F: 0.427" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tvl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tvl-sv\n\n\n* source languages: tvl\n* target languages: sv\n* OPUS readme: tvl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.7, chr-F: 0.427" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tvl #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tvl-sv\n\n\n* source languages: tvl\n* target languages: sv\n* OPUS readme: tvl-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.7, chr-F: 0.427" ]
translation
transformers
### opus-mt-tw-es

* source languages: tw
* target languages: es
* OPUS readme: [tw-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.es | 25.9 | 0.441 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tw-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tw", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tw #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tw-es * source languages: tw * target languages: es * OPUS readme: tw-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.9, chr-F: 0.441
[ "### opus-mt-tw-es\n\n\n* source languages: tw\n* target languages: es\n* OPUS readme: tw-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.441" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tw #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tw-es\n\n\n* source languages: tw\n* target languages: es\n* OPUS readme: tw-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.441" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tw #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tw-es\n\n\n* source languages: tw\n* target languages: es\n* OPUS readme: tw-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.9, chr-F: 0.441" ]
translation
transformers
### opus-mt-tw-fi

* source languages: tw
* target languages: fi
* OPUS readme: [tw-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fi/opus-2020-01-24.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.fi | 25.6 | 0.488 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tw-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tw", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tw #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tw-fi * source languages: tw * target languages: fi * OPUS readme: tw-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.6, chr-F: 0.488
[ "### opus-mt-tw-fi\n\n\n* source languages: tw\n* target languages: fi\n* OPUS readme: tw-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.488" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tw #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tw-fi\n\n\n* source languages: tw\n* target languages: fi\n* OPUS readme: tw-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.488" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tw #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tw-fi\n\n\n* source languages: tw\n* target languages: fi\n* OPUS readme: tw-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.6, chr-F: 0.488" ]
translation
transformers
### opus-mt-tw-fr

* source languages: tw
* target languages: fr
* OPUS readme: [tw-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.fr | 26.7 | 0.442 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tw-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tw", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tw #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tw-fr * source languages: tw * target languages: fr * OPUS readme: tw-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.7, chr-F: 0.442
[ "### opus-mt-tw-fr\n\n\n* source languages: tw\n* target languages: fr\n* OPUS readme: tw-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.442" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tw #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tw-fr\n\n\n* source languages: tw\n* target languages: fr\n* OPUS readme: tw-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.442" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tw #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tw-fr\n\n\n* source languages: tw\n* target languages: fr\n* OPUS readme: tw-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.7, chr-F: 0.442" ]
translation
transformers
### opus-mt-tw-sv

* source languages: tw
* target languages: sv
* OPUS readme: [tw-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tw-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tw-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tw.sv | 29.0 | 0.471 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tw-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tw", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tw #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tw-sv * source languages: tw * target languages: sv * OPUS readme: tw-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 29.0, chr-F: 0.471
[ "### opus-mt-tw-sv\n\n\n* source languages: tw\n* target languages: sv\n* OPUS readme: tw-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.471" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tw #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tw-sv\n\n\n* source languages: tw\n* target languages: sv\n* OPUS readme: tw-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.471" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tw #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tw-sv\n\n\n* source languages: tw\n* target languages: sv\n* OPUS readme: tw-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.0, chr-F: 0.471" ]
translation
transformers
### opus-mt-ty-es

* source languages: ty
* target languages: es
* OPUS readme: [ty-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ty.es | 27.3 | 0.457 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ty-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ty", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ty #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ty-es * source languages: ty * target languages: es * OPUS readme: ty-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.3, chr-F: 0.457
[ "### opus-mt-ty-es\n\n\n* source languages: ty\n* target languages: es\n* OPUS readme: ty-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.457" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ty #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ty-es\n\n\n* source languages: ty\n* target languages: es\n* OPUS readme: ty-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.457" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ty #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ty-es\n\n\n* source languages: ty\n* target languages: es\n* OPUS readme: ty-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.3, chr-F: 0.457" ]
translation
transformers
### opus-mt-ty-fi

* source languages: ty
* target languages: fi
* OPUS readme: [ty-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fi/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ty.fi | 21.7 | 0.451 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ty-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ty", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ty #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ty-fi * source languages: ty * target languages: fi * OPUS readme: ty-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 21.7, chr-F: 0.451
[ "### opus-mt-ty-fi\n\n\n* source languages: ty\n* target languages: fi\n* OPUS readme: ty-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.451" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ty #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ty-fi\n\n\n* source languages: ty\n* target languages: fi\n* OPUS readme: ty-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.451" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ty #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ty-fi\n\n\n* source languages: ty\n* target languages: fi\n* OPUS readme: ty-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.7, chr-F: 0.451" ]
translation
transformers
### opus-mt-ty-fr

* source languages: ty
* target languages: fr
* OPUS readme: [ty-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ty.fr | 30.2 | 0.480 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ty-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ty", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ty #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ty-fr * source languages: ty * target languages: fr * OPUS readme: ty-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.2, chr-F: 0.480
[ "### opus-mt-ty-fr\n\n\n* source languages: ty\n* target languages: fr\n* OPUS readme: ty-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.480" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ty #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ty-fr\n\n\n* source languages: ty\n* target languages: fr\n* OPUS readme: ty-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.480" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ty #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ty-fr\n\n\n* source languages: ty\n* target languages: fr\n* OPUS readme: ty-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.480" ]
translation
transformers
### opus-mt-ty-sv

* source languages: ty
* target languages: sv
* OPUS readme: [ty-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ty-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ty-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ty-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ty.sv | 28.9 | 0.472 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ty-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ty", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ty #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ty-sv * source languages: ty * target languages: sv * OPUS readme: ty-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 28.9, chr-F: 0.472
[ "### opus-mt-ty-sv\n\n\n* source languages: ty\n* target languages: sv\n* OPUS readme: ty-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.472" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ty #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ty-sv\n\n\n* source languages: ty\n* target languages: sv\n* OPUS readme: ty-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.472" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ty #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ty-sv\n\n\n* source languages: ty\n* target languages: sv\n* OPUS readme: ty-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.9, chr-F: 0.472" ]
translation
transformers
### opus-mt-tzo-es

* source languages: tzo
* target languages: es
* OPUS readme: [tzo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tzo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tzo.es | 20.8 | 0.381 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-tzo-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tzo", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #tzo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-tzo-es * source languages: tzo * target languages: es * OPUS readme: tzo-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.8, chr-F: 0.381
[ "### opus-mt-tzo-es\n\n\n* source languages: tzo\n* target languages: es\n* OPUS readme: tzo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.8, chr-F: 0.381" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tzo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-tzo-es\n\n\n* source languages: tzo\n* target languages: es\n* OPUS readme: tzo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.8, chr-F: 0.381" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #tzo #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-tzo-es\n\n\n* source languages: tzo\n* target languages: es\n* OPUS readme: tzo-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.8, chr-F: 0.381" ]
translation
transformers
### ukr-bul

* source group: Ukrainian
* target group: Bulgarian
* OPUS readme: [ukr-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-bul/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): bul
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.bul | 55.7 | 0.734 |

### System Info:
- hf_name: ukr-bul
- source_languages: ukr
- target_languages: bul
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-bul/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'bg']
- src_constituents: {'ukr'}
- tgt_constituents: {'bul', 'bul_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-bul/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: bul
- short_pair: uk-bg
- chrF2_score: 0.7340000000000001
- bleu: 55.7
- brevity_penalty: 0.976
- ref_len: 5181.0
- src_name: Ukrainian
- tgt_name: Bulgarian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: bg
- prefer_old: False
- long_pair: ukr-bul
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
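For quick experiments, this kind of checkpoint can also be driven through the generic `pipeline` helper instead of the explicit Marian classes. A minimal sketch, assuming the Hub id `Helsinki-NLP/opus-mt-uk-bg` given in this record; the input string is an arbitrary Ukrainian example, not taken from the Tatoeba test set.

```python
# Minimal pipeline sketch (assumes the generic transformers pipeline API).
from transformers import pipeline

uk_to_bg = pipeline("translation", model="Helsinki-NLP/opus-mt-uk-bg")

# "Good morning!" in Ukrainian; purely illustrative input.
print(uk_to_bg("Доброго ранку!", max_length=64))
```

The pipeline returns a list of dicts whose `translation_text` field holds the Bulgarian output.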
{"language": ["uk", "bg"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-bg
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "bg", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "bg" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-bul * source group: Ukrainian * target group: Bulgarian * OPUS readme: ukr-bul * model: transformer-align * source language(s): ukr * target language(s): bul * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 55.7, chr-F: 0.734 ### System Info: * hf\_name: ukr-bul * source\_languages: ukr * target\_languages: bul * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'bg'] * src\_constituents: {'ukr'} * tgt\_constituents: {'bul', 'bul\_Latn'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: bul * short\_pair: uk-bg * chrF2\_score: 0.7340000000000001 * bleu: 55.7 * brevity\_penalty: 0.976 * ref\_len: 5181.0 * src\_name: Ukrainian * tgt\_name: Bulgarian * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: bg * prefer\_old: False * long\_pair: ukr-bul * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-bul\n\n\n* source group: Ukrainian\n* target group: Bulgarian\n* OPUS readme: ukr-bul\n* model: transformer-align\n* source language(s): ukr\n* target language(s): bul\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.7, chr-F: 0.734", "### System Info:\n\n\n* hf\\_name: ukr-bul\n* source\\_languages: ukr\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'bg']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: bul\n* short\\_pair: uk-bg\n* chrF2\\_score: 0.7340000000000001\n* bleu: 55.7\n* brevity\\_penalty: 0.976\n* ref\\_len: 5181.0\n* src\\_name: Ukrainian\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: ukr-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-bul\n\n\n* source group: Ukrainian\n* target group: Bulgarian\n* OPUS readme: ukr-bul\n* model: transformer-align\n* source language(s): ukr\n* target language(s): bul\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.7, chr-F: 0.734", "### System Info:\n\n\n* hf\\_name: ukr-bul\n* source\\_languages: ukr\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'bg']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: bul\n* short\\_pair: uk-bg\n* chrF2\\_score: 0.7340000000000001\n* bleu: 55.7\n* brevity\\_penalty: 0.976\n* ref\\_len: 5181.0\n* src\\_name: Ukrainian\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: ukr-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 137, 421 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-bul\n\n\n* source group: Ukrainian\n* target group: Bulgarian\n* OPUS readme: ukr-bul\n* model: transformer-align\n* source language(s): ukr\n* target language(s): bul\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 55.7, chr-F: 0.734### System Info:\n\n\n* hf\\_name: ukr-bul\n* source\\_languages: ukr\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'bg']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: bul\n* short\\_pair: uk-bg\n* chrF2\\_score: 0.7340000000000001\n* bleu: 55.7\n* brevity\\_penalty: 0.976\n* ref\\_len: 5181.0\n* src\\_name: Ukrainian\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: ukr-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-cat

* source group: Ukrainian
* target group: Catalan
* OPUS readme: [ukr-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-cat/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.cat | 33.7 | 0.538 |

### System Info:
- hf_name: ukr-cat
- source_languages: ukr
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'ca']
- src_constituents: {'ukr'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-cat/opus-2020-06-16.test.txt
- src_alpha3: ukr
- tgt_alpha3: cat
- short_pair: uk-ca
- chrF2_score: 0.5379999999999999
- bleu: 33.7
- brevity_penalty: 0.972
- ref_len: 2670.0
- src_name: Ukrainian
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: uk
- tgt_alpha2: ca
- prefer_old: False
- long_pair: ukr-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["uk", "ca"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-ca
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "ca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "ca" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-cat * source group: Ukrainian * target group: Catalan * OPUS readme: ukr-cat * model: transformer-align * source language(s): ukr * target language(s): cat * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 33.7, chr-F: 0.538 ### System Info: * hf\_name: ukr-cat * source\_languages: ukr * target\_languages: cat * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'ca'] * src\_constituents: {'ukr'} * tgt\_constituents: {'cat'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: cat * short\_pair: uk-ca * chrF2\_score: 0.5379999999999999 * bleu: 33.7 * brevity\_penalty: 0.972 * ref\_len: 2670.0 * src\_name: Ukrainian * tgt\_name: Catalan * train\_date: 2020-06-16 * src\_alpha2: uk * tgt\_alpha2: ca * prefer\_old: False * long\_pair: ukr-cat * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-cat\n\n\n* source group: Ukrainian\n* target group: Catalan\n* OPUS readme: ukr-cat\n* model: transformer-align\n* source language(s): ukr\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.7, chr-F: 0.538", "### System Info:\n\n\n* hf\\_name: ukr-cat\n* source\\_languages: ukr\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'ca']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: cat\n* short\\_pair: uk-ca\n* chrF2\\_score: 0.5379999999999999\n* bleu: 33.7\n* brevity\\_penalty: 0.972\n* ref\\_len: 2670.0\n* src\\_name: Ukrainian\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-16\n* src\\_alpha2: uk\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: ukr-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-cat\n\n\n* source group: Ukrainian\n* target group: Catalan\n* OPUS readme: ukr-cat\n* model: transformer-align\n* source language(s): ukr\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.7, chr-F: 0.538", "### System Info:\n\n\n* hf\\_name: ukr-cat\n* source\\_languages: ukr\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'ca']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: cat\n* short\\_pair: uk-ca\n* chrF2\\_score: 0.5379999999999999\n* bleu: 33.7\n* brevity\\_penalty: 0.972\n* ref\\_len: 2670.0\n* src\\_name: Ukrainian\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-16\n* src\\_alpha2: uk\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: ukr-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 409 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #ca #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-cat\n\n\n* source group: Ukrainian\n* target group: Catalan\n* OPUS readme: ukr-cat\n* model: transformer-align\n* source language(s): ukr\n* target language(s): cat\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.7, chr-F: 0.538### System Info:\n\n\n* hf\\_name: ukr-cat\n* source\\_languages: ukr\n* target\\_languages: cat\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'ca']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'cat'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: cat\n* short\\_pair: uk-ca\n* chrF2\\_score: 0.5379999999999999\n* bleu: 33.7\n* brevity\\_penalty: 0.972\n* ref\\_len: 2670.0\n* src\\_name: Ukrainian\n* tgt\\_name: Catalan\n* train\\_date: 2020-06-16\n* src\\_alpha2: uk\n* tgt\\_alpha2: ca\n* prefer\\_old: False\n* long\\_pair: ukr-cat\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-ces

* source group: Ukrainian
* target group: Czech
* OPUS readme: [ukr-ces](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ces/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): ces
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.ces | 52.0 | 0.686 |

### System Info:
- hf_name: ukr-ces
- source_languages: ukr
- target_languages: ces
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ces/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'cs']
- src_constituents: {'ukr'}
- tgt_constituents: {'ces'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ces/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: ces
- short_pair: uk-cs
- chrF2_score: 0.6859999999999999
- bleu: 52.0
- brevity_penalty: 0.993
- ref_len: 8550.0
- src_name: Ukrainian
- tgt_name: Czech
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: cs
- prefer_old: False
- long_pair: ukr-ces
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["uk", "cs"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-cs
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "cs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "cs" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #cs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-ces * source group: Ukrainian * target group: Czech * OPUS readme: ukr-ces * model: transformer-align * source language(s): ukr * target language(s): ces * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 52.0, chr-F: 0.686 ### System Info: * hf\_name: ukr-ces * source\_languages: ukr * target\_languages: ces * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'cs'] * src\_constituents: {'ukr'} * tgt\_constituents: {'ces'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: ces * short\_pair: uk-cs * chrF2\_score: 0.6859999999999999 * bleu: 52.0 * brevity\_penalty: 0.993 * ref\_len: 8550.0 * src\_name: Ukrainian * tgt\_name: Czech * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: cs * prefer\_old: False * long\_pair: ukr-ces * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-ces\n\n\n* source group: Ukrainian\n* target group: Czech\n* OPUS readme: ukr-ces\n* model: transformer-align\n* source language(s): ukr\n* target language(s): ces\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.0, chr-F: 0.686", "### System Info:\n\n\n* hf\\_name: ukr-ces\n* source\\_languages: ukr\n* target\\_languages: ces\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'cs']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'ces'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: ces\n* short\\_pair: uk-cs\n* chrF2\\_score: 0.6859999999999999\n* bleu: 52.0\n* brevity\\_penalty: 0.993\n* ref\\_len: 8550.0\n* src\\_name: Ukrainian\n* tgt\\_name: Czech\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: cs\n* prefer\\_old: False\n* long\\_pair: ukr-ces\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #cs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-ces\n\n\n* source group: Ukrainian\n* target group: Czech\n* OPUS readme: ukr-ces\n* model: transformer-align\n* source language(s): ukr\n* target language(s): ces\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.0, chr-F: 0.686", "### System Info:\n\n\n* hf\\_name: ukr-ces\n* source\\_languages: ukr\n* target\\_languages: ces\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'cs']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'ces'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: ces\n* short\\_pair: uk-cs\n* chrF2\\_score: 0.6859999999999999\n* bleu: 52.0\n* brevity\\_penalty: 0.993\n* ref\\_len: 8550.0\n* src\\_name: Ukrainian\n* tgt\\_name: Czech\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: cs\n* prefer\\_old: False\n* long\\_pair: ukr-ces\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 413 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #cs #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-ces\n\n\n* source group: Ukrainian\n* target group: Czech\n* OPUS readme: ukr-ces\n* model: transformer-align\n* source language(s): ukr\n* target language(s): ces\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.0, chr-F: 0.686### System Info:\n\n\n* hf\\_name: ukr-ces\n* source\\_languages: ukr\n* target\\_languages: ces\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'cs']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'ces'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: ces\n* short\\_pair: uk-cs\n* chrF2\\_score: 0.6859999999999999\n* bleu: 52.0\n* brevity\\_penalty: 0.993\n* ref\\_len: 8550.0\n* src\\_name: Ukrainian\n* tgt\\_name: Czech\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: cs\n* prefer\\_old: False\n* long\\_pair: ukr-ces\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-deu

* source group: Ukrainian
* target group: German
* OPUS readme: [ukr-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-deu/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.deu | 48.2 | 0.661 |

### System Info:
- hf_name: ukr-deu
- source_languages: ukr
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'de']
- src_constituents: {'ukr'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-deu/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: deu
- short_pair: uk-de
- chrF2_score: 0.6609999999999999
- bleu: 48.2
- brevity_penalty: 0.98
- ref_len: 62298.0
- src_name: Ukrainian
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: de
- prefer_old: False
- long_pair: ukr-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["uk", "de"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "de" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-deu * source group: Ukrainian * target group: German * OPUS readme: ukr-deu * model: transformer-align * source language(s): ukr * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 48.2, chr-F: 0.661 ### System Info: * hf\_name: ukr-deu * source\_languages: ukr * target\_languages: deu * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'de'] * src\_constituents: {'ukr'} * tgt\_constituents: {'deu'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: deu * short\_pair: uk-de * chrF2\_score: 0.6609999999999999 * bleu: 48.2 * brevity\_penalty: 0.98 * ref\_len: 62298.0 * src\_name: Ukrainian * tgt\_name: German * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: de * prefer\_old: False * long\_pair: ukr-deu * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-deu\n\n\n* source group: Ukrainian\n* target group: German\n* OPUS readme: ukr-deu\n* model: transformer-align\n* source language(s): ukr\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.2, chr-F: 0.661", "### System Info:\n\n\n* hf\\_name: ukr-deu\n* source\\_languages: ukr\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'de']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: deu\n* short\\_pair: uk-de\n* chrF2\\_score: 0.6609999999999999\n* bleu: 48.2\n* brevity\\_penalty: 0.98\n* ref\\_len: 62298.0\n* src\\_name: Ukrainian\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: ukr-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-deu\n\n\n* source group: Ukrainian\n* target group: German\n* OPUS readme: ukr-deu\n* model: transformer-align\n* source language(s): ukr\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.2, chr-F: 0.661", "### System Info:\n\n\n* hf\\_name: ukr-deu\n* source\\_languages: ukr\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'de']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: deu\n* short\\_pair: uk-de\n* chrF2\\_score: 0.6609999999999999\n* bleu: 48.2\n* brevity\\_penalty: 0.98\n* ref\\_len: 62298.0\n* src\\_name: Ukrainian\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: ukr-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 413 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-deu\n\n\n* source group: Ukrainian\n* target group: German\n* OPUS readme: ukr-deu\n* model: transformer-align\n* source language(s): ukr\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.2, chr-F: 0.661### System Info:\n\n\n* hf\\_name: ukr-deu\n* source\\_languages: ukr\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'de']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: deu\n* short\\_pair: uk-de\n* chrF2\\_score: 0.6609999999999999\n* bleu: 48.2\n* brevity\\_penalty: 0.98\n* ref\\_len: 62298.0\n* src\\_name: Ukrainian\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: ukr-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
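The record above points at the Helsinki-NLP/opus-mt-uk-de checkpoint and lists transformers as its library. As a minimal, hedged sketch of how such a MarianMT checkpoint is typically loaded and run (the example sentence and variable names are illustrative and not part of the record), something like the following should work with a recent transformers release:

```python
# Minimal sketch: translate Ukrainian -> German with the MarianMT checkpoint
# referenced in the record above. Assumes `transformers` plus a PyTorch backend
# are installed; the sample sentence is purely illustrative.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-uk-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_sentences = ["Я люблю читати книжки."]  # illustrative Ukrainian input

# Tokenize, generate, and decode the translation.
batch = tokenizer(src_sentences, return_tensors="pt", padding=True)
generated = model.generate(**batch)
translations = [tokenizer.decode(t, skip_special_tokens=True) for t in generated]
print(translations)
```

The same pattern should apply to every opus-mt-uk-* checkpoint listed in the records below; only the model name changes.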
### opus-mt-uk-en * source languages: uk * target languages: en * OPUS readme: [uk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.uk.en | 64.1 | 0.757 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-uk-en * source languages: uk * target languages: en * OPUS readme: uk-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 64.1, chr-F: 0.757
[ "### opus-mt-uk-en\n\n\n* source languages: uk\n* target languages: en\n* OPUS readme: uk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 64.1, chr-F: 0.757" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-uk-en\n\n\n* source languages: uk\n* target languages: en\n* OPUS readme: uk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 64.1, chr-F: 0.757" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-uk-en\n\n\n* source languages: uk\n* target languages: en\n* OPUS readme: uk-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 64.1, chr-F: 0.757" ]
translation
transformers
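For quick experiments, the higher-level pipeline API is usually sufficient. The snippet below is a sketch under the same assumptions as above: the checkpoint name is taken from the record, while the input text and the max_length value are invented for illustration.

```python
# Sketch: one-call translation with the transformers pipeline API.
# The checkpoint name comes from the record above; the input is illustrative.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-uk-en")
result = translator("Доброго ранку, світе!", max_length=128)
print(result[0]["translation_text"])
```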
### opus-mt-uk-es * source languages: uk * target languages: es * OPUS readme: [uk-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.uk.es | 50.4 | 0.680 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-uk-es * source languages: uk * target languages: es * OPUS readme: uk-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 50.4, chr-F: 0.680
[ "### opus-mt-uk-es\n\n\n* source languages: uk\n* target languages: es\n* OPUS readme: uk-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.4, chr-F: 0.680" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-uk-es\n\n\n* source languages: uk\n* target languages: es\n* OPUS readme: uk-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.4, chr-F: 0.680" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-uk-es\n\n\n* source languages: uk\n* target languages: es\n* OPUS readme: uk-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.4, chr-F: 0.680" ]
translation
transformers
### opus-mt-uk-fi * source languages: uk * target languages: fi * OPUS readme: [uk-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-fi/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fi/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fi/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.uk.fi | 24.4 | 0.490 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-fi
null
[ "transformers", "pytorch", "tf", "safetensors", "marian", "text2text-generation", "translation", "uk", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #uk #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-uk-fi * source languages: uk * target languages: fi * OPUS readme: uk-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.4, chr-F: 0.490
[ "### opus-mt-uk-fi\n\n\n* source languages: uk\n* target languages: fi\n* OPUS readme: uk-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.490" ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #uk #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-uk-fi\n\n\n* source languages: uk\n* target languages: fi\n* OPUS readme: uk-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.490" ]
[ 55, 105 ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #marian #text2text-generation #translation #uk #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-uk-fi\n\n\n* source languages: uk\n* target languages: fi\n* OPUS readme: uk-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.4, chr-F: 0.490" ]
translation
transformers
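The BLEU and chr-F numbers reported in these records were computed on the Tatoeba/JW300 test sets linked from each card. As a hedged sketch of how comparable corpus-level scores can be reproduced locally with the sacrebleu package (the hypothesis and reference lists are placeholders, not the actual released test files):

```python
# Sketch: recompute corpus-level BLEU and chrF for a set of model outputs,
# in the spirit of the benchmark rows above. Requires `pip install sacrebleu`.
# `hypotheses` holds decoded model output and `references` the reference side
# of the test set, one sentence per entry; both are illustrative here.
import sacrebleu

hypotheses = ["This is a translated sentence."]        # model output (placeholder)
references = [["This is the reference sentence."]]     # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)

print(f"BLEU  = {bleu.score:.1f}")
print(f"chr-F = {chrf.score / 100:.3f}")  # the records report chr-F on a 0-1 scale
```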
### opus-mt-uk-fr * source languages: uk * target languages: fr * OPUS readme: [uk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.uk.fr | 52.1 | 0.681 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-uk-fr * source languages: uk * target languages: fr * OPUS readme: uk-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 52.1, chr-F: 0.681
[ "### opus-mt-uk-fr\n\n\n* source languages: uk\n* target languages: fr\n* OPUS readme: uk-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.1, chr-F: 0.681" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-uk-fr\n\n\n* source languages: uk\n* target languages: fr\n* OPUS readme: uk-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.1, chr-F: 0.681" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-uk-fr\n\n\n* source languages: uk\n* target languages: fr\n* OPUS readme: uk-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.1, chr-F: 0.681" ]
translation
transformers
### ukr-heb * source group: Ukrainian * target group: Hebrew * OPUS readme: [ukr-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md) * model: transformer-align * source language(s): ukr * target language(s): heb * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ukr.heb | 35.7 | 0.557 | ### System Info: - hf_name: ukr-heb - source_languages: ukr - target_languages: heb - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-heb/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['uk', 'he'] - src_constituents: {'ukr'} - tgt_constituents: {'heb'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-heb/opus-2020-06-17.test.txt - src_alpha3: ukr - tgt_alpha3: heb - short_pair: uk-he - chrF2_score: 0.557 - bleu: 35.7 - brevity_penalty: 1.0 - ref_len: 4765.0 - src_name: Ukrainian - tgt_name: Hebrew - train_date: 2020-06-17 - src_alpha2: uk - tgt_alpha2: he - prefer_old: False - long_pair: ukr-heb - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["uk", "he"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-he
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "he", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "he" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-heb * source group: Ukrainian * target group: Hebrew * OPUS readme: ukr-heb * model: transformer-align * source language(s): ukr * target language(s): heb * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 35.7, chr-F: 0.557 ### System Info: * hf\_name: ukr-heb * source\_languages: ukr * target\_languages: heb * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'he'] * src\_constituents: {'ukr'} * tgt\_constituents: {'heb'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: heb * short\_pair: uk-he * chrF2\_score: 0.557 * bleu: 35.7 * brevity\_penalty: 1.0 * ref\_len: 4765.0 * src\_name: Ukrainian * tgt\_name: Hebrew * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: he * prefer\_old: False * long\_pair: ukr-heb * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-heb\n\n\n* source group: Ukrainian\n* target group: Hebrew\n* OPUS readme: ukr-heb\n* model: transformer-align\n* source language(s): ukr\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.7, chr-F: 0.557", "### System Info:\n\n\n* hf\\_name: ukr-heb\n* source\\_languages: ukr\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'he']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: heb\n* short\\_pair: uk-he\n* chrF2\\_score: 0.557\n* bleu: 35.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 4765.0\n* src\\_name: Ukrainian\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: ukr-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-heb\n\n\n* source group: Ukrainian\n* target group: Hebrew\n* OPUS readme: ukr-heb\n* model: transformer-align\n* source language(s): ukr\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.7, chr-F: 0.557", "### System Info:\n\n\n* hf\\_name: ukr-heb\n* source\\_languages: ukr\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'he']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: heb\n* short\\_pair: uk-he\n* chrF2\\_score: 0.557\n* bleu: 35.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 4765.0\n* src\\_name: Ukrainian\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: ukr-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 400 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-heb\n\n\n* source group: Ukrainian\n* target group: Hebrew\n* OPUS readme: ukr-heb\n* model: transformer-align\n* source language(s): ukr\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.7, chr-F: 0.557### System Info:\n\n\n* hf\\_name: ukr-heb\n* source\\_languages: ukr\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'he']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: heb\n* short\\_pair: uk-he\n* chrF2\\_score: 0.557\n* bleu: 35.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 4765.0\n* src\\_name: Ukrainian\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: ukr-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-hun * source group: Ukrainian * target group: Hungarian * OPUS readme: [ukr-hun](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hun/README.md) * model: transformer-align * source language(s): ukr * target language(s): hun * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ukr.hun | 41.4 | 0.649 | ### System Info: - hf_name: ukr-hun - source_languages: ukr - target_languages: hun - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hun/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['uk', 'hu'] - src_constituents: {'ukr'} - tgt_constituents: {'hun'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opus-2020-06-17.test.txt - src_alpha3: ukr - tgt_alpha3: hun - short_pair: uk-hu - chrF2_score: 0.649 - bleu: 41.4 - brevity_penalty: 0.9740000000000001 - ref_len: 2433.0 - src_name: Ukrainian - tgt_name: Hungarian - train_date: 2020-06-17 - src_alpha2: uk - tgt_alpha2: hu - prefer_old: False - long_pair: ukr-hun - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["uk", "hu"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-hu
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "hu" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-hun * source group: Ukrainian * target group: Hungarian * OPUS readme: ukr-hun * model: transformer-align * source language(s): ukr * target language(s): hun * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 41.4, chr-F: 0.649 ### System Info: * hf\_name: ukr-hun * source\_languages: ukr * target\_languages: hun * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'hu'] * src\_constituents: {'ukr'} * tgt\_constituents: {'hun'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: hun * short\_pair: uk-hu * chrF2\_score: 0.649 * bleu: 41.4 * brevity\_penalty: 0.9740000000000001 * ref\_len: 2433.0 * src\_name: Ukrainian * tgt\_name: Hungarian * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: hu * prefer\_old: False * long\_pair: ukr-hun * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-hun\n\n\n* source group: Ukrainian\n* target group: Hungarian\n* OPUS readme: ukr-hun\n* model: transformer-align\n* source language(s): ukr\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.4, chr-F: 0.649", "### System Info:\n\n\n* hf\\_name: ukr-hun\n* source\\_languages: ukr\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'hu']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: hun\n* short\\_pair: uk-hu\n* chrF2\\_score: 0.649\n* bleu: 41.4\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 2433.0\n* src\\_name: Ukrainian\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: ukr-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-hun\n\n\n* source group: Ukrainian\n* target group: Hungarian\n* OPUS readme: ukr-hun\n* model: transformer-align\n* source language(s): ukr\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.4, chr-F: 0.649", "### System Info:\n\n\n* hf\\_name: ukr-hun\n* source\\_languages: ukr\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'hu']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: hun\n* short\\_pair: uk-hu\n* chrF2\\_score: 0.649\n* bleu: 41.4\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 2433.0\n* src\\_name: Ukrainian\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: ukr-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 407 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-hun\n\n\n* source group: Ukrainian\n* target group: Hungarian\n* OPUS readme: ukr-hun\n* model: transformer-align\n* source language(s): ukr\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.4, chr-F: 0.649### System Info:\n\n\n* hf\\_name: ukr-hun\n* source\\_languages: ukr\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'hu']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: hun\n* short\\_pair: uk-hu\n* chrF2\\_score: 0.649\n* bleu: 41.4\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 2433.0\n* src\\_name: Ukrainian\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: ukr-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-ita * source group: Ukrainian * target group: Italian * OPUS readme: [ukr-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ita/README.md) * model: transformer-align * source language(s): ukr * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ukr.ita | 46.0 | 0.662 | ### System Info: - hf_name: ukr-ita - source_languages: ukr - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['uk', 'it'] - src_constituents: {'ukr'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-ita/opus-2020-06-17.test.txt - src_alpha3: ukr - tgt_alpha3: ita - short_pair: uk-it - chrF2_score: 0.662 - bleu: 46.0 - brevity_penalty: 0.9490000000000001 - ref_len: 27846.0 - src_name: Ukrainian - tgt_name: Italian - train_date: 2020-06-17 - src_alpha2: uk - tgt_alpha2: it - prefer_old: False - long_pair: ukr-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["uk", "it"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-it
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "it" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-ita * source group: Ukrainian * target group: Italian * OPUS readme: ukr-ita * model: transformer-align * source language(s): ukr * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 46.0, chr-F: 0.662 ### System Info: * hf\_name: ukr-ita * source\_languages: ukr * target\_languages: ita * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'it'] * src\_constituents: {'ukr'} * tgt\_constituents: {'ita'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: ita * short\_pair: uk-it * chrF2\_score: 0.662 * bleu: 46.0 * brevity\_penalty: 0.9490000000000001 * ref\_len: 27846.0 * src\_name: Ukrainian * tgt\_name: Italian * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: it * prefer\_old: False * long\_pair: ukr-ita * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-ita\n\n\n* source group: Ukrainian\n* target group: Italian\n* OPUS readme: ukr-ita\n* model: transformer-align\n* source language(s): ukr\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.0, chr-F: 0.662", "### System Info:\n\n\n* hf\\_name: ukr-ita\n* source\\_languages: ukr\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'it']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: ita\n* short\\_pair: uk-it\n* chrF2\\_score: 0.662\n* bleu: 46.0\n* brevity\\_penalty: 0.9490000000000001\n* ref\\_len: 27846.0\n* src\\_name: Ukrainian\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: ukr-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-ita\n\n\n* source group: Ukrainian\n* target group: Italian\n* OPUS readme: ukr-ita\n* model: transformer-align\n* source language(s): ukr\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.0, chr-F: 0.662", "### System Info:\n\n\n* hf\\_name: ukr-ita\n* source\\_languages: ukr\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'it']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: ita\n* short\\_pair: uk-it\n* chrF2\\_score: 0.662\n* bleu: 46.0\n* brevity\\_penalty: 0.9490000000000001\n* ref\\_len: 27846.0\n* src\\_name: Ukrainian\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: ukr-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 407 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-ita\n\n\n* source group: Ukrainian\n* target group: Italian\n* OPUS readme: ukr-ita\n* model: transformer-align\n* source language(s): ukr\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 46.0, chr-F: 0.662### System Info:\n\n\n* hf\\_name: ukr-ita\n* source\\_languages: ukr\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'it']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: ita\n* short\\_pair: uk-it\n* chrF2\\_score: 0.662\n* bleu: 46.0\n* brevity\\_penalty: 0.9490000000000001\n* ref\\_len: 27846.0\n* src\\_name: Ukrainian\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: ukr-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-nld * source group: Ukrainian * target group: Dutch * OPUS readme: [ukr-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nld/README.md) * model: transformer-align * source language(s): ukr * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ukr.nld | 48.7 | 0.656 | ### System Info: - hf_name: ukr-nld - source_languages: ukr - target_languages: nld - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nld/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['uk', 'nl'] - src_constituents: {'ukr'} - tgt_constituents: {'nld'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nld/opus-2020-06-17.test.txt - src_alpha3: ukr - tgt_alpha3: nld - short_pair: uk-nl - chrF2_score: 0.6559999999999999 - bleu: 48.7 - brevity_penalty: 0.985 - ref_len: 59943.0 - src_name: Ukrainian - tgt_name: Dutch - train_date: 2020-06-17 - src_alpha2: uk - tgt_alpha2: nl - prefer_old: False - long_pair: ukr-nld - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["uk", "nl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-nl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "nl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "nl" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-nld * source group: Ukrainian * target group: Dutch * OPUS readme: ukr-nld * model: transformer-align * source language(s): ukr * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 48.7, chr-F: 0.656 ### System Info: * hf\_name: ukr-nld * source\_languages: ukr * target\_languages: nld * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'nl'] * src\_constituents: {'ukr'} * tgt\_constituents: {'nld'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: nld * short\_pair: uk-nl * chrF2\_score: 0.6559999999999999 * bleu: 48.7 * brevity\_penalty: 0.985 * ref\_len: 59943.0 * src\_name: Ukrainian * tgt\_name: Dutch * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: nl * prefer\_old: False * long\_pair: ukr-nld * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-nld\n\n\n* source group: Ukrainian\n* target group: Dutch\n* OPUS readme: ukr-nld\n* model: transformer-align\n* source language(s): ukr\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.7, chr-F: 0.656", "### System Info:\n\n\n* hf\\_name: ukr-nld\n* source\\_languages: ukr\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'nl']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: nld\n* short\\_pair: uk-nl\n* chrF2\\_score: 0.6559999999999999\n* bleu: 48.7\n* brevity\\_penalty: 0.985\n* ref\\_len: 59943.0\n* src\\_name: Ukrainian\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: ukr-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-nld\n\n\n* source group: Ukrainian\n* target group: Dutch\n* OPUS readme: ukr-nld\n* model: transformer-align\n* source language(s): ukr\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.7, chr-F: 0.656", "### System Info:\n\n\n* hf\\_name: ukr-nld\n* source\\_languages: ukr\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'nl']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: nld\n* short\\_pair: uk-nl\n* chrF2\\_score: 0.6559999999999999\n* bleu: 48.7\n* brevity\\_penalty: 0.985\n* ref\\_len: 59943.0\n* src\\_name: Ukrainian\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: ukr-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 414 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-nld\n\n\n* source group: Ukrainian\n* target group: Dutch\n* OPUS readme: ukr-nld\n* model: transformer-align\n* source language(s): ukr\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 48.7, chr-F: 0.656### System Info:\n\n\n* hf\\_name: ukr-nld\n* source\\_languages: ukr\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'nl']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: nld\n* short\\_pair: uk-nl\n* chrF2\\_score: 0.6559999999999999\n* bleu: 48.7\n* brevity\\_penalty: 0.985\n* ref\\_len: 59943.0\n* src\\_name: Ukrainian\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: ukr-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-nor * source group: Ukrainian * target group: Norwegian * OPUS readme: [ukr-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nor/README.md) * model: transformer-align * source language(s): ukr * target language(s): nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ukr.nor | 21.3 | 0.397 | ### System Info: - hf_name: ukr-nor - source_languages: ukr - target_languages: nor - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-nor/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['uk', 'no'] - src_constituents: {'ukr'} - tgt_constituents: {'nob', 'nno'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-nor/opus-2020-06-17.test.txt - src_alpha3: ukr - tgt_alpha3: nor - short_pair: uk-no - chrF2_score: 0.397 - bleu: 21.3 - brevity_penalty: 0.966 - ref_len: 4378.0 - src_name: Ukrainian - tgt_name: Norwegian - train_date: 2020-06-17 - src_alpha2: uk - tgt_alpha2: no - prefer_old: False - long_pair: ukr-nor - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["uk", false], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-no
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "no", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "no" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-nor * source group: Ukrainian * target group: Norwegian * OPUS readme: ukr-nor * model: transformer-align * source language(s): ukr * target language(s): nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 21.3, chr-F: 0.397 ### System Info: * hf\_name: ukr-nor * source\_languages: ukr * target\_languages: nor * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'no'] * src\_constituents: {'ukr'} * tgt\_constituents: {'nob', 'nno'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: nor * short\_pair: uk-no * chrF2\_score: 0.397 * bleu: 21.3 * brevity\_penalty: 0.966 * ref\_len: 4378.0 * src\_name: Ukrainian * tgt\_name: Norwegian * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: no * prefer\_old: False * long\_pair: ukr-nor * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-nor\n\n\n* source group: Ukrainian\n* target group: Norwegian\n* OPUS readme: ukr-nor\n* model: transformer-align\n* source language(s): ukr\n* target language(s): nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.3, chr-F: 0.397", "### System Info:\n\n\n* hf\\_name: ukr-nor\n* source\\_languages: ukr\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'no']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: nor\n* short\\_pair: uk-no\n* chrF2\\_score: 0.397\n* bleu: 21.3\n* brevity\\_penalty: 0.966\n* ref\\_len: 4378.0\n* src\\_name: Ukrainian\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: ukr-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-nor\n\n\n* source group: Ukrainian\n* target group: Norwegian\n* OPUS readme: ukr-nor\n* model: transformer-align\n* source language(s): ukr\n* target language(s): nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.3, chr-F: 0.397", "### System Info:\n\n\n* hf\\_name: ukr-nor\n* source\\_languages: ukr\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'no']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: nor\n* short\\_pair: uk-no\n* chrF2\\_score: 0.397\n* bleu: 21.3\n* brevity\\_penalty: 0.966\n* ref\\_len: 4378.0\n* src\\_name: Ukrainian\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: ukr-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 135, 403 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #no #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-nor\n\n\n* source group: Ukrainian\n* target group: Norwegian\n* OPUS readme: ukr-nor\n* model: transformer-align\n* source language(s): ukr\n* target language(s): nob\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.3, chr-F: 0.397### System Info:\n\n\n* hf\\_name: ukr-nor\n* source\\_languages: ukr\n* target\\_languages: nor\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'no']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'nob', 'nno'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: nor\n* short\\_pair: uk-no\n* chrF2\\_score: 0.397\n* bleu: 21.3\n* brevity\\_penalty: 0.966\n* ref\\_len: 4378.0\n* src\\_name: Ukrainian\n* tgt\\_name: Norwegian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: no\n* prefer\\_old: False\n* long\\_pair: ukr-nor\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-pol * source group: Ukrainian * target group: Polish * OPUS readme: [ukr-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-pol/README.md) * model: transformer-align * source language(s): ukr * target language(s): pol * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-pol/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-pol/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-pol/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ukr.pol | 49.9 | 0.685 | ### System Info: - hf_name: ukr-pol - source_languages: ukr - target_languages: pol - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-pol/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['uk', 'pl'] - src_constituents: {'ukr'} - tgt_constituents: {'pol'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-pol/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-pol/opus-2020-06-17.test.txt - src_alpha3: ukr - tgt_alpha3: pol - short_pair: uk-pl - chrF2_score: 0.685 - bleu: 49.9 - brevity_penalty: 0.9470000000000001 - ref_len: 13098.0 - src_name: Ukrainian - tgt_name: Polish - train_date: 2020-06-17 - src_alpha2: uk - tgt_alpha2: pl - prefer_old: False - long_pair: ukr-pol - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["uk", "pl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-pl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "pl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "pl" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-pol * source group: Ukrainian * target group: Polish * OPUS readme: ukr-pol * model: transformer-align * source language(s): ukr * target language(s): pol * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 49.9, chr-F: 0.685 ### System Info: * hf\_name: ukr-pol * source\_languages: ukr * target\_languages: pol * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'pl'] * src\_constituents: {'ukr'} * tgt\_constituents: {'pol'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: pol * short\_pair: uk-pl * chrF2\_score: 0.685 * bleu: 49.9 * brevity\_penalty: 0.9470000000000001 * ref\_len: 13098.0 * src\_name: Ukrainian * tgt\_name: Polish * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: pl * prefer\_old: False * long\_pair: ukr-pol * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-pol\n\n\n* source group: Ukrainian\n* target group: Polish\n* OPUS readme: ukr-pol\n* model: transformer-align\n* source language(s): ukr\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.9, chr-F: 0.685", "### System Info:\n\n\n* hf\\_name: ukr-pol\n* source\\_languages: ukr\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'pl']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: pol\n* short\\_pair: uk-pl\n* chrF2\\_score: 0.685\n* bleu: 49.9\n* brevity\\_penalty: 0.9470000000000001\n* ref\\_len: 13098.0\n* src\\_name: Ukrainian\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: ukr-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-pol\n\n\n* source group: Ukrainian\n* target group: Polish\n* OPUS readme: ukr-pol\n* model: transformer-align\n* source language(s): ukr\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.9, chr-F: 0.685", "### System Info:\n\n\n* hf\\_name: ukr-pol\n* source\\_languages: ukr\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'pl']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: pol\n* short\\_pair: uk-pl\n* chrF2\\_score: 0.685\n* bleu: 49.9\n* brevity\\_penalty: 0.9470000000000001\n* ref\\_len: 13098.0\n* src\\_name: Ukrainian\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: ukr-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 403 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-pol\n\n\n* source group: Ukrainian\n* target group: Polish\n* OPUS readme: ukr-pol\n* model: transformer-align\n* source language(s): ukr\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 49.9, chr-F: 0.685### System Info:\n\n\n* hf\\_name: ukr-pol\n* source\\_languages: ukr\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'pl']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: pol\n* short\\_pair: uk-pl\n* chrF2\\_score: 0.685\n* bleu: 49.9\n* brevity\\_penalty: 0.9470000000000001\n* ref\\_len: 13098.0\n* src\\_name: Ukrainian\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: ukr-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-por

* source group: Ukrainian
* target group: Portuguese
* OPUS readme: [ukr-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-por/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.por | 38.1 | 0.601 |

### System Info:
- hf_name: ukr-por
- source_languages: ukr
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'pt']
- src_constituents: {'ukr'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-por/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: por
- short_pair: uk-pt
- chrF2_score: 0.601
- bleu: 38.1
- brevity_penalty: 0.981
- ref_len: 21315.0
- src_name: Ukrainian
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: pt
- prefer_old: False
- long_pair: ukr-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["uk", "pt"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-pt
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "pt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "pt" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-por * source group: Ukrainian * target group: Portuguese * OPUS readme: ukr-por * model: transformer-align * source language(s): ukr * target language(s): por * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 38.1, chr-F: 0.601 ### System Info: * hf\_name: ukr-por * source\_languages: ukr * target\_languages: por * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'pt'] * src\_constituents: {'ukr'} * tgt\_constituents: {'por'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: por * short\_pair: uk-pt * chrF2\_score: 0.601 * bleu: 38.1 * brevity\_penalty: 0.981 * ref\_len: 21315.0 * src\_name: Ukrainian * tgt\_name: Portuguese * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: pt * prefer\_old: False * long\_pair: ukr-por * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-por\n\n\n* source group: Ukrainian\n* target group: Portuguese\n* OPUS readme: ukr-por\n* model: transformer-align\n* source language(s): ukr\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.1, chr-F: 0.601", "### System Info:\n\n\n* hf\\_name: ukr-por\n* source\\_languages: ukr\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'pt']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: por\n* short\\_pair: uk-pt\n* chrF2\\_score: 0.601\n* bleu: 38.1\n* brevity\\_penalty: 0.981\n* ref\\_len: 21315.0\n* src\\_name: Ukrainian\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: ukr-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-por\n\n\n* source group: Ukrainian\n* target group: Portuguese\n* OPUS readme: ukr-por\n* model: transformer-align\n* source language(s): ukr\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.1, chr-F: 0.601", "### System Info:\n\n\n* hf\\_name: ukr-por\n* source\\_languages: ukr\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'pt']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: por\n* short\\_pair: uk-pt\n* chrF2\\_score: 0.601\n* bleu: 38.1\n* brevity\\_penalty: 0.981\n* ref\\_len: 21315.0\n* src\\_name: Ukrainian\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: ukr-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 396 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-por\n\n\n* source group: Ukrainian\n* target group: Portuguese\n* OPUS readme: ukr-por\n* model: transformer-align\n* source language(s): ukr\n* target language(s): por\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.1, chr-F: 0.601### System Info:\n\n\n* hf\\_name: ukr-por\n* source\\_languages: ukr\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'pt']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: por\n* short\\_pair: uk-pt\n* chrF2\\_score: 0.601\n* bleu: 38.1\n* brevity\\_penalty: 0.981\n* ref\\_len: 21315.0\n* src\\_name: Ukrainian\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: ukr-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-rus

* source group: Ukrainian
* target group: Russian
* OPUS readme: [ukr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-rus/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.rus | 69.2 | 0.826 |

### System Info:
- hf_name: ukr-rus
- source_languages: ukr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'ru']
- src_constituents: {'ukr'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-rus/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: rus
- short_pair: uk-ru
- chrF2_score: 0.826
- bleu: 69.2
- brevity_penalty: 0.992
- ref_len: 60387.0
- src_name: Ukrainian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: ru
- prefer_old: False
- long_pair: ukr-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
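The Benchmarks row above reports corpus BLEU and chr-F on the Tatoeba test set. The card does not spell out the scoring command, so the following is only a generic sacrebleu sketch with placeholder file names; note that the chr-F column here is on a 0–1 scale, while current sacrebleu releases report chrF on 0–100.

```python
# Generic sketch of scoring decoded output the way the Benchmarks table does.
# "hypotheses.ru" / "references.ru" are placeholder file names, not files from this card.
import sacrebleu

with open("hypotheses.ru", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.ru", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
# divide chrf.score by 100 to compare with the 0-1 chr-F values in the table
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.3f}")
```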
{"language": ["uk", "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "ru" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-rus * source group: Ukrainian * target group: Russian * OPUS readme: ukr-rus * model: transformer-align * source language(s): ukr * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 69.2, chr-F: 0.826 ### System Info: * hf\_name: ukr-rus * source\_languages: ukr * target\_languages: rus * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'ru'] * src\_constituents: {'ukr'} * tgt\_constituents: {'rus'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: rus * short\_pair: uk-ru * chrF2\_score: 0.826 * bleu: 69.2 * brevity\_penalty: 0.992 * ref\_len: 60387.0 * src\_name: Ukrainian * tgt\_name: Russian * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: ru * prefer\_old: False * long\_pair: ukr-rus * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-rus\n\n\n* source group: Ukrainian\n* target group: Russian\n* OPUS readme: ukr-rus\n* model: transformer-align\n* source language(s): ukr\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 69.2, chr-F: 0.826", "### System Info:\n\n\n* hf\\_name: ukr-rus\n* source\\_languages: ukr\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'ru']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: rus\n* short\\_pair: uk-ru\n* chrF2\\_score: 0.826\n* bleu: 69.2\n* brevity\\_penalty: 0.992\n* ref\\_len: 60387.0\n* src\\_name: Ukrainian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: ukr-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-rus\n\n\n* source group: Ukrainian\n* target group: Russian\n* OPUS readme: ukr-rus\n* model: transformer-align\n* source language(s): ukr\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 69.2, chr-F: 0.826", "### System Info:\n\n\n* hf\\_name: ukr-rus\n* source\\_languages: ukr\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'ru']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: rus\n* short\\_pair: uk-ru\n* chrF2\\_score: 0.826\n* bleu: 69.2\n* brevity\\_penalty: 0.992\n* ref\\_len: 60387.0\n* src\\_name: Ukrainian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: ukr-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 397 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-rus\n\n\n* source group: Ukrainian\n* target group: Russian\n* OPUS readme: ukr-rus\n* model: transformer-align\n* source language(s): ukr\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 69.2, chr-F: 0.826### System Info:\n\n\n* hf\\_name: ukr-rus\n* source\\_languages: ukr\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'ru']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: rus\n* short\\_pair: uk-ru\n* chrF2\\_score: 0.826\n* bleu: 69.2\n* brevity\\_penalty: 0.992\n* ref\\_len: 60387.0\n* src\\_name: Ukrainian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: ukr-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-hbs

* source group: Ukrainian
* target group: Serbo-Croatian
* OPUS readme: [ukr-hbs](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hbs/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): hrv srp_Cyrl srp_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.hbs | 42.8 | 0.631 |

### System Info:
- hf_name: ukr-hbs
- source_languages: ukr
- target_languages: hbs
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hbs/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'sh']
- src_constituents: {'ukr'}
- tgt_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hbs/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: hbs
- short_pair: uk-sh
- chrF2_score: 0.631
- bleu: 42.8
- brevity_penalty: 0.96
- ref_len: 5128.0
- src_name: Ukrainian
- tgt_name: Serbo-Croatian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: sh
- prefer_old: False
- long_pair: ukr-hbs
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
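Because this checkpoint serves several target variants (hrv, srp_Cyrl, srp_Latn), the sentence-initial `>>id<<` token mentioned above selects the output language. A minimal sketch, assuming the standard `transformers` MarianMT API; the input sentences are illustrative.

```python
# Sketch: the >>id<< prefix picks the target variant for the multi-target uk-sh checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-uk-sh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = [
    ">>hrv<< Доброго ранку!",       # request Croatian output
    ">>srp_Latn<< Доброго ранку!",  # request Serbian in Latin script
]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```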
{"language": ["uk", "sh"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-sh
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "sh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "sh" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #sh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-hbs * source group: Ukrainian * target group: Serbo-Croatian * OPUS readme: ukr-hbs * model: transformer-align * source language(s): ukr * target language(s): hrv srp\_Cyrl srp\_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 42.8, chr-F: 0.631 ### System Info: * hf\_name: ukr-hbs * source\_languages: ukr * target\_languages: hbs * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'sh'] * src\_constituents: {'ukr'} * tgt\_constituents: {'hrv', 'srp\_Cyrl', 'bos\_Latn', 'srp\_Latn'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: hbs * short\_pair: uk-sh * chrF2\_score: 0.631 * bleu: 42.8 * brevity\_penalty: 0.96 * ref\_len: 5128.0 * src\_name: Ukrainian * tgt\_name: Serbo-Croatian * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: sh * prefer\_old: False * long\_pair: ukr-hbs * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-hbs\n\n\n* source group: Ukrainian\n* target group: Serbo-Croatian\n* OPUS readme: ukr-hbs\n* model: transformer-align\n* source language(s): ukr\n* target language(s): hrv srp\\_Cyrl srp\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.8, chr-F: 0.631", "### System Info:\n\n\n* hf\\_name: ukr-hbs\n* source\\_languages: ukr\n* target\\_languages: hbs\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'sh']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: hbs\n* short\\_pair: uk-sh\n* chrF2\\_score: 0.631\n* bleu: 42.8\n* brevity\\_penalty: 0.96\n* ref\\_len: 5128.0\n* src\\_name: Ukrainian\n* tgt\\_name: Serbo-Croatian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: sh\n* prefer\\_old: False\n* long\\_pair: ukr-hbs\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #sh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-hbs\n\n\n* source group: Ukrainian\n* target group: Serbo-Croatian\n* OPUS readme: ukr-hbs\n* model: transformer-align\n* source language(s): ukr\n* target language(s): hrv srp\\_Cyrl srp\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.8, chr-F: 0.631", "### System Info:\n\n\n* hf\\_name: ukr-hbs\n* source\\_languages: ukr\n* target\\_languages: hbs\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'sh']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: hbs\n* short\\_pair: uk-sh\n* chrF2\\_score: 0.631\n* bleu: 42.8\n* brevity\\_penalty: 0.96\n* ref\\_len: 5128.0\n* src\\_name: Ukrainian\n* tgt\\_name: Serbo-Croatian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: sh\n* prefer\\_old: False\n* long\\_pair: ukr-hbs\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 180, 432 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #sh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-hbs\n\n\n* source group: Ukrainian\n* target group: Serbo-Croatian\n* OPUS readme: ukr-hbs\n* model: transformer-align\n* source language(s): ukr\n* target language(s): hrv srp\\_Cyrl srp\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.8, chr-F: 0.631### System Info:\n\n\n* hf\\_name: ukr-hbs\n* source\\_languages: ukr\n* target\\_languages: hbs\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'sh']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: hbs\n* short\\_pair: uk-sh\n* chrF2\\_score: 0.631\n* bleu: 42.8\n* brevity\\_penalty: 0.96\n* ref\\_len: 5128.0\n* src\\_name: Ukrainian\n* tgt\\_name: Serbo-Croatian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: sh\n* prefer\\_old: False\n* long\\_pair: ukr-hbs\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ukr-slv

* source group: Ukrainian
* target group: Slovenian
* OPUS readme: [ukr-slv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-slv/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): slv
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.slv | 11.8 | 0.280 |

### System Info:
- hf_name: ukr-slv
- source_languages: ukr
- target_languages: slv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-slv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'sl']
- src_constituents: {'ukr'}
- tgt_constituents: {'slv'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-slv/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: slv
- short_pair: uk-sl
- chrF2_score: 0.28
- bleu: 11.8
- brevity_penalty: 1.0
- ref_len: 3823.0
- src_name: Ukrainian
- tgt_name: Slovenian
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: sl
- prefer_old: False
- long_pair: ukr-slv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["uk", "sl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-sl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "sl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "sl" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #sl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-slv * source group: Ukrainian * target group: Slovenian * OPUS readme: ukr-slv * model: transformer-align * source language(s): ukr * target language(s): slv * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 11.8, chr-F: 0.280 ### System Info: * hf\_name: ukr-slv * source\_languages: ukr * target\_languages: slv * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'sl'] * src\_constituents: {'ukr'} * tgt\_constituents: {'slv'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: slv * short\_pair: uk-sl * chrF2\_score: 0.28 * bleu: 11.8 * brevity\_penalty: 1.0 * ref\_len: 3823.0 * src\_name: Ukrainian * tgt\_name: Slovenian * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: sl * prefer\_old: False * long\_pair: ukr-slv * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-slv\n\n\n* source group: Ukrainian\n* target group: Slovenian\n* OPUS readme: ukr-slv\n* model: transformer-align\n* source language(s): ukr\n* target language(s): slv\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.8, chr-F: 0.280", "### System Info:\n\n\n* hf\\_name: ukr-slv\n* source\\_languages: ukr\n* target\\_languages: slv\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'sl']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: slv\n* short\\_pair: uk-sl\n* chrF2\\_score: 0.28\n* bleu: 11.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 3823.0\n* src\\_name: Ukrainian\n* tgt\\_name: Slovenian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: sl\n* prefer\\_old: False\n* long\\_pair: ukr-slv\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #sl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-slv\n\n\n* source group: Ukrainian\n* target group: Slovenian\n* OPUS readme: ukr-slv\n* model: transformer-align\n* source language(s): ukr\n* target language(s): slv\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.8, chr-F: 0.280", "### System Info:\n\n\n* hf\\_name: ukr-slv\n* source\\_languages: ukr\n* target\\_languages: slv\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'sl']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: slv\n* short\\_pair: uk-sl\n* chrF2\\_score: 0.28\n* bleu: 11.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 3823.0\n* src\\_name: Ukrainian\n* tgt\\_name: Slovenian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: sl\n* prefer\\_old: False\n* long\\_pair: ukr-slv\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 136, 399 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #sl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-slv\n\n\n* source group: Ukrainian\n* target group: Slovenian\n* OPUS readme: ukr-slv\n* model: transformer-align\n* source language(s): ukr\n* target language(s): slv\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.8, chr-F: 0.280### System Info:\n\n\n* hf\\_name: ukr-slv\n* source\\_languages: ukr\n* target\\_languages: slv\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'sl']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'slv'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: slv\n* short\\_pair: uk-sl\n* chrF2\\_score: 0.28\n* bleu: 11.8\n* brevity\\_penalty: 1.0\n* ref\\_len: 3823.0\n* src\\_name: Ukrainian\n* tgt\\_name: Slovenian\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: sl\n* prefer\\_old: False\n* long\\_pair: ukr-slv\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-uk-sv

* source languages: uk
* target languages: sv
* OPUS readme: [uk-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/uk-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/uk-sv/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.uk.sv | 27.8 | 0.474 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-uk-sv * source languages: uk * target languages: sv * OPUS readme: uk-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.8, chr-F: 0.474
[ "### opus-mt-uk-sv\n\n\n* source languages: uk\n* target languages: sv\n* OPUS readme: uk-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.474" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-uk-sv\n\n\n* source languages: uk\n* target languages: sv\n* OPUS readme: uk-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.474" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-uk-sv\n\n\n* source languages: uk\n* target languages: sv\n* OPUS readme: uk-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.8, chr-F: 0.474" ]
translation
transformers
### ukr-tur

* source group: Ukrainian
* target group: Turkish
* OPUS readme: [ukr-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-tur/README.md)
* model: transformer-align
* source language(s): ukr
* target language(s): tur
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ukr.tur | 39.3 | 0.655 |

### System Info:
- hf_name: ukr-tur
- source_languages: ukr
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['uk', 'tr']
- src_constituents: {'ukr'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.test.txt
- src_alpha3: ukr
- tgt_alpha3: tur
- short_pair: uk-tr
- chrF2_score: 0.655
- bleu: 39.3
- brevity_penalty: 0.934
- ref_len: 11844.0
- src_name: Ukrainian
- tgt_name: Turkish
- train_date: 2020-06-17
- src_alpha2: uk
- tgt_alpha2: tr
- prefer_old: False
- long_pair: ukr-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["uk", "tr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-uk-tr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "uk", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "uk", "tr" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #uk #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ukr-tur * source group: Ukrainian * target group: Turkish * OPUS readme: ukr-tur * model: transformer-align * source language(s): ukr * target language(s): tur * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 39.3, chr-F: 0.655 ### System Info: * hf\_name: ukr-tur * source\_languages: ukr * target\_languages: tur * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['uk', 'tr'] * src\_constituents: {'ukr'} * tgt\_constituents: {'tur'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ukr * tgt\_alpha3: tur * short\_pair: uk-tr * chrF2\_score: 0.655 * bleu: 39.3 * brevity\_penalty: 0.934 * ref\_len: 11844.0 * src\_name: Ukrainian * tgt\_name: Turkish * train\_date: 2020-06-17 * src\_alpha2: uk * tgt\_alpha2: tr * prefer\_old: False * long\_pair: ukr-tur * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ukr-tur\n\n\n* source group: Ukrainian\n* target group: Turkish\n* OPUS readme: ukr-tur\n* model: transformer-align\n* source language(s): ukr\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.3, chr-F: 0.655", "### System Info:\n\n\n* hf\\_name: ukr-tur\n* source\\_languages: ukr\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'tr']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: tur\n* short\\_pair: uk-tr\n* chrF2\\_score: 0.655\n* bleu: 39.3\n* brevity\\_penalty: 0.934\n* ref\\_len: 11844.0\n* src\\_name: Ukrainian\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: ukr-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ukr-tur\n\n\n* source group: Ukrainian\n* target group: Turkish\n* OPUS readme: ukr-tur\n* model: transformer-align\n* source language(s): ukr\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.3, chr-F: 0.655", "### System Info:\n\n\n* hf\\_name: ukr-tur\n* source\\_languages: ukr\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'tr']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: tur\n* short\\_pair: uk-tr\n* chrF2\\_score: 0.655\n* bleu: 39.3\n* brevity\\_penalty: 0.934\n* ref\\_len: 11844.0\n* src\\_name: Ukrainian\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: ukr-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 401 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #uk #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ukr-tur\n\n\n* source group: Ukrainian\n* target group: Turkish\n* OPUS readme: ukr-tur\n* model: transformer-align\n* source language(s): ukr\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 39.3, chr-F: 0.655### System Info:\n\n\n* hf\\_name: ukr-tur\n* source\\_languages: ukr\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['uk', 'tr']\n* src\\_constituents: {'ukr'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ukr\n* tgt\\_alpha3: tur\n* short\\_pair: uk-tr\n* chrF2\\_score: 0.655\n* bleu: 39.3\n* brevity\\_penalty: 0.934\n* ref\\_len: 11844.0\n* src\\_name: Ukrainian\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-17\n* src\\_alpha2: uk\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: ukr-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-umb-en

* source languages: umb
* target languages: en
* OPUS readme: [umb-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/umb-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/umb-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/umb-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/umb-en/opus-2020-01-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.umb.en | 27.5 | 0.425 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-umb-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "umb", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #umb #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-umb-en * source languages: umb * target languages: en * OPUS readme: umb-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.5, chr-F: 0.425
[ "### opus-mt-umb-en\n\n\n* source languages: umb\n* target languages: en\n* OPUS readme: umb-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.425" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #umb #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-umb-en\n\n\n* source languages: umb\n* target languages: en\n* OPUS readme: umb-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.425" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #umb #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-umb-en\n\n\n* source languages: umb\n* target languages: en\n* OPUS readme: umb-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.425" ]
translation
transformers
### urd-eng

* source group: Urdu
* target group: English
* OPUS readme: [urd-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urd-eng/README.md)
* model: transformer-align
* source language(s): urd
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.urd.eng | 23.2 | 0.435 |

### System Info:
- hf_name: urd-eng
- source_languages: urd
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urd-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ur', 'en']
- src_constituents: {'urd'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.test.txt
- src_alpha3: urd
- tgt_alpha3: eng
- short_pair: ur-en
- chrF2_score: 0.435
- bleu: 23.2
- brevity_penalty: 0.975
- ref_len: 12029.0
- src_name: Urdu
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ur
- tgt_alpha2: en
- prefer_old: False
- long_pair: urd-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
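For one-off use, the high-level `pipeline` wrapper is an alternative to instantiating the Marian classes directly; a short sketch, with an invented Urdu example sentence.

```python
# Sketch: Urdu -> English via the high-level transformers pipeline API.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ur-en")
result = translator("یہ ایک مثال کا جملہ ہے۔", max_length=64)  # illustrative input
print(result)  # e.g. [{'translation_text': '...'}]
```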
{"language": ["ur", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ur-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ur", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ur", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ur #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### urd-eng * source group: Urdu * target group: English * OPUS readme: urd-eng * model: transformer-align * source language(s): urd * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.2, chr-F: 0.435 ### System Info: * hf\_name: urd-eng * source\_languages: urd * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ur', 'en'] * src\_constituents: {'urd'} * tgt\_constituents: {'eng'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: urd * tgt\_alpha3: eng * short\_pair: ur-en * chrF2\_score: 0.435 * bleu: 23.2 * brevity\_penalty: 0.975 * ref\_len: 12029.0 * src\_name: Urdu * tgt\_name: English * train\_date: 2020-06-17 * src\_alpha2: ur * tgt\_alpha2: en * prefer\_old: False * long\_pair: urd-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### urd-eng\n\n\n* source group: Urdu\n* target group: English\n* OPUS readme: urd-eng\n* model: transformer-align\n* source language(s): urd\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.435", "### System Info:\n\n\n* hf\\_name: urd-eng\n* source\\_languages: urd\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ur', 'en']\n* src\\_constituents: {'urd'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: urd\n* tgt\\_alpha3: eng\n* short\\_pair: ur-en\n* chrF2\\_score: 0.435\n* bleu: 23.2\n* brevity\\_penalty: 0.975\n* ref\\_len: 12029.0\n* src\\_name: Urdu\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ur\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: urd-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ur #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### urd-eng\n\n\n* source group: Urdu\n* target group: English\n* OPUS readme: urd-eng\n* model: transformer-align\n* source language(s): urd\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.435", "### System Info:\n\n\n* hf\\_name: urd-eng\n* source\\_languages: urd\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ur', 'en']\n* src\\_constituents: {'urd'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: urd\n* tgt\\_alpha3: eng\n* short\\_pair: ur-en\n* chrF2\\_score: 0.435\n* bleu: 23.2\n* brevity\\_penalty: 0.975\n* ref\\_len: 12029.0\n* src\\_name: Urdu\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ur\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: urd-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 133, 395 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ur #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### urd-eng\n\n\n* source group: Urdu\n* target group: English\n* OPUS readme: urd-eng\n* model: transformer-align\n* source language(s): urd\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.435### System Info:\n\n\n* hf\\_name: urd-eng\n* source\\_languages: urd\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ur', 'en']\n* src\\_constituents: {'urd'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: urd\n* tgt\\_alpha3: eng\n* short\\_pair: ur-en\n* chrF2\\_score: 0.435\n* bleu: 23.2\n* brevity\\_penalty: 0.975\n* ref\\_len: 12029.0\n* src\\_name: Urdu\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ur\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: urd-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### urj-eng * source group: Uralic languages * target group: English * OPUS readme: [urj-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-eng/README.md) * model: transformer * source language(s): est fin fkv_Latn hun izh kpv krl liv_Latn mdf mhr myv sma sme udm vro * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2015-enfi-fineng.fin.eng | 22.7 | 0.511 | | newsdev2018-enet-esteng.est.eng | 26.6 | 0.545 | | newssyscomb2009-huneng.hun.eng | 21.3 | 0.493 | | newstest2009-huneng.hun.eng | 20.1 | 0.487 | | newstest2015-enfi-fineng.fin.eng | 23.9 | 0.521 | | newstest2016-enfi-fineng.fin.eng | 25.8 | 0.542 | | newstest2017-enfi-fineng.fin.eng | 28.9 | 0.562 | | newstest2018-enet-esteng.est.eng | 27.0 | 0.552 | | newstest2018-enfi-fineng.fin.eng | 21.2 | 0.492 | | newstest2019-fien-fineng.fin.eng | 25.3 | 0.531 | | newstestB2016-enfi-fineng.fin.eng | 21.3 | 0.500 | | newstestB2017-enfi-fineng.fin.eng | 24.4 | 0.528 | | newstestB2017-fien-fineng.fin.eng | 24.4 | 0.528 | | Tatoeba-test.chm-eng.chm.eng | 0.8 | 0.131 | | Tatoeba-test.est-eng.est.eng | 34.5 | 0.526 | | Tatoeba-test.fin-eng.fin.eng | 28.1 | 0.485 | | Tatoeba-test.fkv-eng.fkv.eng | 6.8 | 0.335 | | Tatoeba-test.hun-eng.hun.eng | 25.1 | 0.452 | | Tatoeba-test.izh-eng.izh.eng | 11.6 | 0.224 | | Tatoeba-test.kom-eng.kom.eng | 2.4 | 0.110 | | Tatoeba-test.krl-eng.krl.eng | 18.6 | 0.365 | | Tatoeba-test.liv-eng.liv.eng | 0.5 | 0.078 | | Tatoeba-test.mdf-eng.mdf.eng | 1.5 | 0.117 | | Tatoeba-test.multi.eng | 47.8 | 0.646 | | Tatoeba-test.myv-eng.myv.eng | 0.5 | 0.101 | | Tatoeba-test.sma-eng.sma.eng | 1.2 | 0.110 | | Tatoeba-test.sme-eng.sme.eng | 1.5 | 0.147 | | Tatoeba-test.udm-eng.udm.eng | 1.0 | 0.130 | ### System Info: - hf_name: urj-eng - source_languages: urj - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['se', 'fi', 'hu', 'et', 'urj', 'en'] - src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-eng/opus2m-2020-08-01.test.txt - src_alpha3: urj - tgt_alpha3: eng - short_pair: urj-en - chrF2_score: 0.6459999999999999 - bleu: 47.8 - brevity_penalty: 0.993 - ref_len: 70882.0 - src_name: Uralic languages - tgt_name: English - train_date: 2020-08-01 - src_alpha2: urj - tgt_alpha2: en - prefer_old: False - long_pair: urj-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
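Because this model has a single target language (English), no target-language token is needed on the source side. A short sketch using the high-level `pipeline` helper, assuming `transformers` and `sentencepiece` are installed; the Finnish example sentence is illustrative.

```python
from transformers import pipeline

# Any supported Uralic source language can be fed in directly; English is the only target.
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-urj-en")
print(translate("Hyvää huomenta!")[0]["translation_text"])
```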
{"language": ["se", "fi", "hu", "et", "urj", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-urj-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "se", "fi", "hu", "et", "urj", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "se", "fi", "hu", "et", "urj", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #se #fi #hu #et #urj #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### urj-eng * source group: Uralic languages * target group: English * OPUS readme: urj-eng * model: transformer * source language(s): est fin fkv\_Latn hun izh kpv krl liv\_Latn mdf mhr myv sma sme udm vro * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.7, chr-F: 0.511 testset: URL, BLEU: 26.6, chr-F: 0.545 testset: URL, BLEU: 21.3, chr-F: 0.493 testset: URL, BLEU: 20.1, chr-F: 0.487 testset: URL, BLEU: 23.9, chr-F: 0.521 testset: URL, BLEU: 25.8, chr-F: 0.542 testset: URL, BLEU: 28.9, chr-F: 0.562 testset: URL, BLEU: 27.0, chr-F: 0.552 testset: URL, BLEU: 21.2, chr-F: 0.492 testset: URL, BLEU: 25.3, chr-F: 0.531 testset: URL, BLEU: 21.3, chr-F: 0.500 testset: URL, BLEU: 24.4, chr-F: 0.528 testset: URL, BLEU: 24.4, chr-F: 0.528 testset: URL, BLEU: 0.8, chr-F: 0.131 testset: URL, BLEU: 34.5, chr-F: 0.526 testset: URL, BLEU: 28.1, chr-F: 0.485 testset: URL, BLEU: 6.8, chr-F: 0.335 testset: URL, BLEU: 25.1, chr-F: 0.452 testset: URL, BLEU: 11.6, chr-F: 0.224 testset: URL, BLEU: 2.4, chr-F: 0.110 testset: URL, BLEU: 18.6, chr-F: 0.365 testset: URL, BLEU: 0.5, chr-F: 0.078 testset: URL, BLEU: 1.5, chr-F: 0.117 testset: URL, BLEU: 47.8, chr-F: 0.646 testset: URL, BLEU: 0.5, chr-F: 0.101 testset: URL, BLEU: 1.2, chr-F: 0.110 testset: URL, BLEU: 1.5, chr-F: 0.147 testset: URL, BLEU: 1.0, chr-F: 0.130 ### System Info: * hf\_name: urj-eng * source\_languages: urj * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['se', 'fi', 'hu', 'et', 'urj', 'en'] * src\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\_Latn', 'est', 'mhr', 'sma'} * tgt\_constituents: {'eng'} * src\_multilingual: True * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: urj * tgt\_alpha3: eng * short\_pair: urj-en * chrF2\_score: 0.6459999999999999 * bleu: 47.8 * brevity\_penalty: 0.993 * ref\_len: 70882.0 * src\_name: Uralic languages * tgt\_name: English * train\_date: 2020-08-01 * src\_alpha2: urj * tgt\_alpha2: en * prefer\_old: False * long\_pair: urj-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### urj-eng\n\n\n* source group: Uralic languages\n* target group: English\n* OPUS readme: urj-eng\n* model: transformer\n* source language(s): est fin fkv\\_Latn hun izh kpv krl liv\\_Latn mdf mhr myv sma sme udm vro\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.511\ntestset: URL, BLEU: 26.6, chr-F: 0.545\ntestset: URL, BLEU: 21.3, chr-F: 0.493\ntestset: URL, BLEU: 20.1, chr-F: 0.487\ntestset: URL, BLEU: 23.9, chr-F: 0.521\ntestset: URL, BLEU: 25.8, chr-F: 0.542\ntestset: URL, BLEU: 28.9, chr-F: 0.562\ntestset: URL, BLEU: 27.0, chr-F: 0.552\ntestset: URL, BLEU: 21.2, chr-F: 0.492\ntestset: URL, BLEU: 25.3, chr-F: 0.531\ntestset: URL, BLEU: 21.3, chr-F: 0.500\ntestset: URL, BLEU: 24.4, chr-F: 0.528\ntestset: URL, BLEU: 24.4, chr-F: 0.528\ntestset: URL, BLEU: 0.8, chr-F: 0.131\ntestset: URL, BLEU: 34.5, chr-F: 0.526\ntestset: URL, BLEU: 28.1, chr-F: 0.485\ntestset: URL, BLEU: 6.8, chr-F: 0.335\ntestset: URL, BLEU: 25.1, chr-F: 0.452\ntestset: URL, BLEU: 11.6, chr-F: 0.224\ntestset: URL, BLEU: 2.4, chr-F: 0.110\ntestset: URL, BLEU: 18.6, chr-F: 0.365\ntestset: URL, BLEU: 0.5, chr-F: 0.078\ntestset: URL, BLEU: 1.5, chr-F: 0.117\ntestset: URL, BLEU: 47.8, chr-F: 0.646\ntestset: URL, BLEU: 0.5, chr-F: 0.101\ntestset: URL, BLEU: 1.2, chr-F: 0.110\ntestset: URL, BLEU: 1.5, chr-F: 0.147\ntestset: URL, BLEU: 1.0, chr-F: 0.130", "### System Info:\n\n\n* hf\\_name: urj-eng\n* source\\_languages: urj\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['se', 'fi', 'hu', 'et', 'urj', 'en']\n* src\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: urj\n* tgt\\_alpha3: eng\n* short\\_pair: urj-en\n* chrF2\\_score: 0.6459999999999999\n* bleu: 47.8\n* brevity\\_penalty: 0.993\n* ref\\_len: 70882.0\n* src\\_name: Uralic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: urj\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: urj-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #se #fi #hu #et #urj #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### urj-eng\n\n\n* source group: Uralic languages\n* target group: English\n* OPUS readme: urj-eng\n* model: transformer\n* source language(s): est fin fkv\\_Latn hun izh kpv krl liv\\_Latn mdf mhr myv sma sme udm vro\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.511\ntestset: URL, BLEU: 26.6, chr-F: 0.545\ntestset: URL, BLEU: 21.3, chr-F: 0.493\ntestset: URL, BLEU: 20.1, chr-F: 0.487\ntestset: URL, BLEU: 23.9, chr-F: 0.521\ntestset: URL, BLEU: 25.8, chr-F: 0.542\ntestset: URL, BLEU: 28.9, chr-F: 0.562\ntestset: URL, BLEU: 27.0, chr-F: 0.552\ntestset: URL, BLEU: 21.2, chr-F: 0.492\ntestset: URL, BLEU: 25.3, chr-F: 0.531\ntestset: URL, BLEU: 21.3, chr-F: 0.500\ntestset: URL, BLEU: 24.4, chr-F: 0.528\ntestset: URL, BLEU: 24.4, chr-F: 0.528\ntestset: URL, BLEU: 0.8, chr-F: 0.131\ntestset: URL, BLEU: 34.5, chr-F: 0.526\ntestset: URL, BLEU: 28.1, chr-F: 0.485\ntestset: URL, BLEU: 6.8, chr-F: 0.335\ntestset: URL, BLEU: 25.1, chr-F: 0.452\ntestset: URL, BLEU: 11.6, chr-F: 0.224\ntestset: URL, BLEU: 2.4, chr-F: 0.110\ntestset: URL, BLEU: 18.6, chr-F: 0.365\ntestset: URL, BLEU: 0.5, chr-F: 0.078\ntestset: URL, BLEU: 1.5, chr-F: 0.117\ntestset: URL, BLEU: 47.8, chr-F: 0.646\ntestset: URL, BLEU: 0.5, chr-F: 0.101\ntestset: URL, BLEU: 1.2, chr-F: 0.110\ntestset: URL, BLEU: 1.5, chr-F: 0.147\ntestset: URL, BLEU: 1.0, chr-F: 0.130", "### System Info:\n\n\n* hf\\_name: urj-eng\n* source\\_languages: urj\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['se', 'fi', 'hu', 'et', 'urj', 'en']\n* src\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: urj\n* tgt\\_alpha3: eng\n* short\\_pair: urj-en\n* chrF2\\_score: 0.6459999999999999\n* bleu: 47.8\n* brevity\\_penalty: 0.993\n* ref\\_len: 70882.0\n* src\\_name: Uralic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: urj\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: urj-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 60, 777, 514 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #se #fi #hu #et #urj #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### urj-eng\n\n\n* source group: Uralic languages\n* target group: English\n* OPUS readme: urj-eng\n* model: transformer\n* source language(s): est fin fkv\\_Latn hun izh kpv krl liv\\_Latn mdf mhr myv sma sme udm vro\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.7, chr-F: 0.511\ntestset: URL, BLEU: 26.6, chr-F: 0.545\ntestset: URL, BLEU: 21.3, chr-F: 0.493\ntestset: URL, BLEU: 20.1, chr-F: 0.487\ntestset: URL, BLEU: 23.9, chr-F: 0.521\ntestset: URL, BLEU: 25.8, chr-F: 0.542\ntestset: URL, BLEU: 28.9, chr-F: 0.562\ntestset: URL, BLEU: 27.0, chr-F: 0.552\ntestset: URL, BLEU: 21.2, chr-F: 0.492\ntestset: URL, BLEU: 25.3, chr-F: 0.531\ntestset: URL, BLEU: 21.3, chr-F: 0.500\ntestset: URL, BLEU: 24.4, chr-F: 0.528\ntestset: URL, BLEU: 24.4, chr-F: 0.528\ntestset: URL, BLEU: 0.8, chr-F: 0.131\ntestset: URL, BLEU: 34.5, chr-F: 0.526\ntestset: URL, BLEU: 28.1, chr-F: 0.485\ntestset: URL, BLEU: 6.8, chr-F: 0.335\ntestset: URL, BLEU: 25.1, chr-F: 0.452\ntestset: URL, BLEU: 11.6, chr-F: 0.224\ntestset: URL, BLEU: 2.4, chr-F: 0.110\ntestset: URL, BLEU: 18.6, chr-F: 0.365\ntestset: URL, BLEU: 0.5, chr-F: 0.078\ntestset: URL, BLEU: 1.5, chr-F: 0.117\ntestset: URL, BLEU: 47.8, chr-F: 0.646\ntestset: URL, BLEU: 0.5, chr-F: 0.101\ntestset: URL, BLEU: 1.2, chr-F: 0.110\ntestset: URL, BLEU: 1.5, chr-F: 0.147\ntestset: URL, BLEU: 1.0, chr-F: 0.130### System Info:\n\n\n* hf\\_name: urj-eng\n* source\\_languages: urj\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['se', 'fi', 'hu', 'et', 'urj', 'en']\n* src\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: urj\n* tgt\\_alpha3: eng\n* short\\_pair: urj-en\n* chrF2\\_score: 0.6459999999999999\n* bleu: 47.8\n* brevity\\_penalty: 0.993\n* ref\\_len: 70882.0\n* src\\_name: Uralic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: urj\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: urj-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### urj-urj * source group: Uralic languages * target group: Uralic languages * OPUS readme: [urj-urj](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-urj/README.md) * model: transformer * source language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro * target language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.zip) * test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.test.txt) * test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.est-est.est.est | 5.1 | 0.288 | | Tatoeba-test.est-fin.est.fin | 50.9 | 0.709 | | Tatoeba-test.est-fkv.est.fkv | 0.7 | 0.215 | | Tatoeba-test.est-vep.est.vep | 1.0 | 0.154 | | Tatoeba-test.fin-est.fin.est | 55.5 | 0.718 | | Tatoeba-test.fin-fkv.fin.fkv | 1.8 | 0.254 | | Tatoeba-test.fin-hun.fin.hun | 45.0 | 0.672 | | Tatoeba-test.fin-izh.fin.izh | 7.1 | 0.492 | | Tatoeba-test.fin-krl.fin.krl | 2.6 | 0.278 | | Tatoeba-test.fkv-est.fkv.est | 0.6 | 0.099 | | Tatoeba-test.fkv-fin.fkv.fin | 15.5 | 0.444 | | Tatoeba-test.fkv-liv.fkv.liv | 0.6 | 0.101 | | Tatoeba-test.fkv-vep.fkv.vep | 0.6 | 0.113 | | Tatoeba-test.hun-fin.hun.fin | 46.3 | 0.675 | | Tatoeba-test.izh-fin.izh.fin | 13.4 | 0.431 | | Tatoeba-test.izh-krl.izh.krl | 2.9 | 0.078 | | Tatoeba-test.krl-fin.krl.fin | 14.1 | 0.439 | | Tatoeba-test.krl-izh.krl.izh | 1.0 | 0.125 | | Tatoeba-test.liv-fkv.liv.fkv | 0.9 | 0.170 | | Tatoeba-test.liv-vep.liv.vep | 2.6 | 0.176 | | Tatoeba-test.multi.multi | 32.9 | 0.580 | | Tatoeba-test.vep-est.vep.est | 3.4 | 0.265 | | Tatoeba-test.vep-fkv.vep.fkv | 0.9 | 0.239 | | Tatoeba-test.vep-liv.vep.liv | 2.6 | 0.190 | ### System Info: - hf_name: urj-urj - source_languages: urj - target_languages: urj - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urj-urj/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['se', 'fi', 'hu', 'et', 'urj'] - src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'} - tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'} - src_multilingual: True - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urj-urj/opus-2020-07-27.test.txt - src_alpha3: urj - tgt_alpha3: urj - short_pair: urj-urj - chrF2_score: 0.58 - bleu: 32.9 - brevity_penalty: 1.0 - ref_len: 19444.0 - src_name: Uralic languages - tgt_name: Uralic languages - train_date: 2020-07-27 - src_alpha2: urj - tgt_alpha2: urj - prefer_old: False - long_pair: urj-urj - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
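Since this is a many-to-many model, each source sentence must start with the `>>id<<` token naming the target language, as noted above. A minimal sketch, assuming `transformers` and `sentencepiece` are installed; the Estonian example sentences and the chosen target IDs (`fin`, `hun`) are illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-urj-urj"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The leading >>id<< token selects the target language for each sentence.
sources = [
    ">>fin<< Tere hommikust!",  # Estonian -> Finnish
    ">>hun<< Tere hommikust!",  # Estonian -> Hungarian
]
batch = tokenizer(sources, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```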
{"language": ["se", "fi", "hu", "et", "urj"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-urj-urj
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "se", "fi", "hu", "et", "urj", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "se", "fi", "hu", "et", "urj" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #se #fi #hu #et #urj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### urj-urj * source group: Uralic languages * target group: Uralic languages * OPUS readme: urj-urj * model: transformer * source language(s): est fin fkv\_Latn hun izh krl liv\_Latn vep vro * target language(s): est fin fkv\_Latn hun izh krl liv\_Latn vep vro * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 5.1, chr-F: 0.288 testset: URL, BLEU: 50.9, chr-F: 0.709 testset: URL, BLEU: 0.7, chr-F: 0.215 testset: URL, BLEU: 1.0, chr-F: 0.154 testset: URL, BLEU: 55.5, chr-F: 0.718 testset: URL, BLEU: 1.8, chr-F: 0.254 testset: URL, BLEU: 45.0, chr-F: 0.672 testset: URL, BLEU: 7.1, chr-F: 0.492 testset: URL, BLEU: 2.6, chr-F: 0.278 testset: URL, BLEU: 0.6, chr-F: 0.099 testset: URL, BLEU: 15.5, chr-F: 0.444 testset: URL, BLEU: 0.6, chr-F: 0.101 testset: URL, BLEU: 0.6, chr-F: 0.113 testset: URL, BLEU: 46.3, chr-F: 0.675 testset: URL, BLEU: 13.4, chr-F: 0.431 testset: URL, BLEU: 2.9, chr-F: 0.078 testset: URL, BLEU: 14.1, chr-F: 0.439 testset: URL, BLEU: 1.0, chr-F: 0.125 testset: URL, BLEU: 0.9, chr-F: 0.170 testset: URL, BLEU: 2.6, chr-F: 0.176 testset: URL, BLEU: 32.9, chr-F: 0.580 testset: URL, BLEU: 3.4, chr-F: 0.265 testset: URL, BLEU: 0.9, chr-F: 0.239 testset: URL, BLEU: 2.6, chr-F: 0.190 ### System Info: * hf\_name: urj-urj * source\_languages: urj * target\_languages: urj * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['se', 'fi', 'hu', 'et', 'urj'] * src\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\_Latn', 'est', 'mhr', 'sma'} * tgt\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\_Latn', 'est', 'mhr', 'sma'} * src\_multilingual: True * tgt\_multilingual: True * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: urj * tgt\_alpha3: urj * short\_pair: urj-urj * chrF2\_score: 0.58 * bleu: 32.9 * brevity\_penalty: 1.0 * ref\_len: 19444.0 * src\_name: Uralic languages * tgt\_name: Uralic languages * train\_date: 2020-07-27 * src\_alpha2: urj * tgt\_alpha2: urj * prefer\_old: False * long\_pair: urj-urj * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### urj-urj\n\n\n* source group: Uralic languages\n* target group: Uralic languages\n* OPUS readme: urj-urj\n* model: transformer\n* source language(s): est fin fkv\\_Latn hun izh krl liv\\_Latn vep vro\n* target language(s): est fin fkv\\_Latn hun izh krl liv\\_Latn vep vro\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 5.1, chr-F: 0.288\ntestset: URL, BLEU: 50.9, chr-F: 0.709\ntestset: URL, BLEU: 0.7, chr-F: 0.215\ntestset: URL, BLEU: 1.0, chr-F: 0.154\ntestset: URL, BLEU: 55.5, chr-F: 0.718\ntestset: URL, BLEU: 1.8, chr-F: 0.254\ntestset: URL, BLEU: 45.0, chr-F: 0.672\ntestset: URL, BLEU: 7.1, chr-F: 0.492\ntestset: URL, BLEU: 2.6, chr-F: 0.278\ntestset: URL, BLEU: 0.6, chr-F: 0.099\ntestset: URL, BLEU: 15.5, chr-F: 0.444\ntestset: URL, BLEU: 0.6, chr-F: 0.101\ntestset: URL, BLEU: 0.6, chr-F: 0.113\ntestset: URL, BLEU: 46.3, chr-F: 0.675\ntestset: URL, BLEU: 13.4, chr-F: 0.431\ntestset: URL, BLEU: 2.9, chr-F: 0.078\ntestset: URL, BLEU: 14.1, chr-F: 0.439\ntestset: URL, BLEU: 1.0, chr-F: 0.125\ntestset: URL, BLEU: 0.9, chr-F: 0.170\ntestset: URL, BLEU: 2.6, chr-F: 0.176\ntestset: URL, BLEU: 32.9, chr-F: 0.580\ntestset: URL, BLEU: 3.4, chr-F: 0.265\ntestset: URL, BLEU: 0.9, chr-F: 0.239\ntestset: URL, BLEU: 2.6, chr-F: 0.190", "### System Info:\n\n\n* hf\\_name: urj-urj\n* source\\_languages: urj\n* target\\_languages: urj\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['se', 'fi', 'hu', 'et', 'urj']\n* src\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* tgt\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: urj\n* tgt\\_alpha3: urj\n* short\\_pair: urj-urj\n* chrF2\\_score: 0.58\n* bleu: 32.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 19444.0\n* src\\_name: Uralic languages\n* tgt\\_name: Uralic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: urj\n* tgt\\_alpha2: urj\n* prefer\\_old: False\n* long\\_pair: urj-urj\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #se #fi #hu #et #urj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### urj-urj\n\n\n* source group: Uralic languages\n* target group: Uralic languages\n* OPUS readme: urj-urj\n* model: transformer\n* source language(s): est fin fkv\\_Latn hun izh krl liv\\_Latn vep vro\n* target language(s): est fin fkv\\_Latn hun izh krl liv\\_Latn vep vro\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 5.1, chr-F: 0.288\ntestset: URL, BLEU: 50.9, chr-F: 0.709\ntestset: URL, BLEU: 0.7, chr-F: 0.215\ntestset: URL, BLEU: 1.0, chr-F: 0.154\ntestset: URL, BLEU: 55.5, chr-F: 0.718\ntestset: URL, BLEU: 1.8, chr-F: 0.254\ntestset: URL, BLEU: 45.0, chr-F: 0.672\ntestset: URL, BLEU: 7.1, chr-F: 0.492\ntestset: URL, BLEU: 2.6, chr-F: 0.278\ntestset: URL, BLEU: 0.6, chr-F: 0.099\ntestset: URL, BLEU: 15.5, chr-F: 0.444\ntestset: URL, BLEU: 0.6, chr-F: 0.101\ntestset: URL, BLEU: 0.6, chr-F: 0.113\ntestset: URL, BLEU: 46.3, chr-F: 0.675\ntestset: URL, BLEU: 13.4, chr-F: 0.431\ntestset: URL, BLEU: 2.9, chr-F: 0.078\ntestset: URL, BLEU: 14.1, chr-F: 0.439\ntestset: URL, BLEU: 1.0, chr-F: 0.125\ntestset: URL, BLEU: 0.9, chr-F: 0.170\ntestset: URL, BLEU: 2.6, chr-F: 0.176\ntestset: URL, BLEU: 32.9, chr-F: 0.580\ntestset: URL, BLEU: 3.4, chr-F: 0.265\ntestset: URL, BLEU: 0.9, chr-F: 0.239\ntestset: URL, BLEU: 2.6, chr-F: 0.190", "### System Info:\n\n\n* hf\\_name: urj-urj\n* source\\_languages: urj\n* target\\_languages: urj\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['se', 'fi', 'hu', 'et', 'urj']\n* src\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* tgt\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: urj\n* tgt\\_alpha3: urj\n* short\\_pair: urj-urj\n* chrF2\\_score: 0.58\n* bleu: 32.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 19444.0\n* src\\_name: Uralic languages\n* tgt\\_name: Uralic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: urj\n* tgt\\_alpha2: urj\n* prefer\\_old: False\n* long\\_pair: urj-urj\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 58, 725, 586 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #se #fi #hu #et #urj #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### urj-urj\n\n\n* source group: Uralic languages\n* target group: Uralic languages\n* OPUS readme: urj-urj\n* model: transformer\n* source language(s): est fin fkv\\_Latn hun izh krl liv\\_Latn vep vro\n* target language(s): est fin fkv\\_Latn hun izh krl liv\\_Latn vep vro\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 5.1, chr-F: 0.288\ntestset: URL, BLEU: 50.9, chr-F: 0.709\ntestset: URL, BLEU: 0.7, chr-F: 0.215\ntestset: URL, BLEU: 1.0, chr-F: 0.154\ntestset: URL, BLEU: 55.5, chr-F: 0.718\ntestset: URL, BLEU: 1.8, chr-F: 0.254\ntestset: URL, BLEU: 45.0, chr-F: 0.672\ntestset: URL, BLEU: 7.1, chr-F: 0.492\ntestset: URL, BLEU: 2.6, chr-F: 0.278\ntestset: URL, BLEU: 0.6, chr-F: 0.099\ntestset: URL, BLEU: 15.5, chr-F: 0.444\ntestset: URL, BLEU: 0.6, chr-F: 0.101\ntestset: URL, BLEU: 0.6, chr-F: 0.113\ntestset: URL, BLEU: 46.3, chr-F: 0.675\ntestset: URL, BLEU: 13.4, chr-F: 0.431\ntestset: URL, BLEU: 2.9, chr-F: 0.078\ntestset: URL, BLEU: 14.1, chr-F: 0.439\ntestset: URL, BLEU: 1.0, chr-F: 0.125\ntestset: URL, BLEU: 0.9, chr-F: 0.170\ntestset: URL, BLEU: 2.6, chr-F: 0.176\ntestset: URL, BLEU: 32.9, chr-F: 0.580\ntestset: URL, BLEU: 3.4, chr-F: 0.265\ntestset: URL, BLEU: 0.9, chr-F: 0.239\ntestset: URL, BLEU: 2.6, chr-F: 0.190### System Info:\n\n\n* hf\\_name: urj-urj\n* source\\_languages: urj\n* target\\_languages: urj\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['se', 'fi', 'hu', 'et', 'urj']\n* src\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* tgt\\_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv\\_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv\\_Latn', 'est', 'mhr', 'sma'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: urj\n* tgt\\_alpha3: urj\n* short\\_pair: urj-urj\n* chrF2\\_score: 0.58\n* bleu: 32.9\n* brevity\\_penalty: 1.0\n* ref\\_len: 19444.0\n* src\\_name: Uralic languages\n* tgt\\_name: Uralic languages\n* train\\_date: 2020-07-27\n* src\\_alpha2: urj\n* tgt\\_alpha2: urj\n* prefer\\_old: False\n* long\\_pair: urj-urj\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-ve-en * source languages: ve * target languages: en * OPUS readme: [ve-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ve-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ve-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ve-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ve-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ve.en | 41.3 | 0.566 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ve-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ve", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ve #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ve-en * source languages: ve * target languages: en * OPUS readme: ve-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 41.3, chr-F: 0.566
[ "### opus-mt-ve-en\n\n\n* source languages: ve\n* target languages: en\n* OPUS readme: ve-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.3, chr-F: 0.566" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ve #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ve-en\n\n\n* source languages: ve\n* target languages: en\n* OPUS readme: ve-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.3, chr-F: 0.566" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ve #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ve-en\n\n\n* source languages: ve\n* target languages: en\n* OPUS readme: ve-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.3, chr-F: 0.566" ]
translation
transformers
### opus-mt-ve-es * source languages: ve * target languages: es * OPUS readme: [ve-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ve-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ve-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ve-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ve-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ve.es | 23.1 | 0.413 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ve-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ve", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ve #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ve-es * source languages: ve * target languages: es * OPUS readme: ve-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.1, chr-F: 0.413
[ "### opus-mt-ve-es\n\n\n* source languages: ve\n* target languages: es\n* OPUS readme: ve-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.413" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ve #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ve-es\n\n\n* source languages: ve\n* target languages: es\n* OPUS readme: ve-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.413" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ve #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ve-es\n\n\n* source languages: ve\n* target languages: es\n* OPUS readme: ve-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.413" ]
translation
transformers
### vie-deu * source group: Vietnamese * target group: German * OPUS readme: [vie-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-deu/README.md) * model: transformer-align * source language(s): vie * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.vie.deu | 27.6 | 0.484 | ### System Info: - hf_name: vie-deu - source_languages: vie - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'de'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-deu/opus-2020-06-17.test.txt - src_alpha3: vie - tgt_alpha3: deu - short_pair: vi-de - chrF2_score: 0.484 - bleu: 27.6 - brevity_penalty: 0.958 - ref_len: 3365.0 - src_name: Vietnamese - tgt_name: German - train_date: 2020-06-17 - src_alpha2: vi - tgt_alpha2: de - prefer_old: False - long_pair: vie-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["vi", "de"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-vi-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "vi", "de" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #vi #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### vie-deu * source group: Vietnamese * target group: German * OPUS readme: vie-deu * model: transformer-align * source language(s): vie * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.6, chr-F: 0.484 ### System Info: * hf\_name: vie-deu * source\_languages: vie * target\_languages: deu * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['vi', 'de'] * src\_constituents: {'vie', 'vie\_Hani'} * tgt\_constituents: {'deu'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: vie * tgt\_alpha3: deu * short\_pair: vi-de * chrF2\_score: 0.484 * bleu: 27.6 * brevity\_penalty: 0.958 * ref\_len: 3365.0 * src\_name: Vietnamese * tgt\_name: German * train\_date: 2020-06-17 * src\_alpha2: vi * tgt\_alpha2: de * prefer\_old: False * long\_pair: vie-deu * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### vie-deu\n\n\n* source group: Vietnamese\n* target group: German\n* OPUS readme: vie-deu\n* model: transformer-align\n* source language(s): vie\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.484", "### System Info:\n\n\n* hf\\_name: vie-deu\n* source\\_languages: vie\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'de']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: deu\n* short\\_pair: vi-de\n* chrF2\\_score: 0.484\n* bleu: 27.6\n* brevity\\_penalty: 0.958\n* ref\\_len: 3365.0\n* src\\_name: Vietnamese\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: vie-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### vie-deu\n\n\n* source group: Vietnamese\n* target group: German\n* OPUS readme: vie-deu\n* model: transformer-align\n* source language(s): vie\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.484", "### System Info:\n\n\n* hf\\_name: vie-deu\n* source\\_languages: vie\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'de']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: deu\n* short\\_pair: vi-de\n* chrF2\\_score: 0.484\n* bleu: 27.6\n* brevity\\_penalty: 0.958\n* ref\\_len: 3365.0\n* src\\_name: Vietnamese\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: vie-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 404 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### vie-deu\n\n\n* source group: Vietnamese\n* target group: German\n* OPUS readme: vie-deu\n* model: transformer-align\n* source language(s): vie\n* target language(s): deu\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.6, chr-F: 0.484### System Info:\n\n\n* hf\\_name: vie-deu\n* source\\_languages: vie\n* target\\_languages: deu\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'de']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'deu'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: deu\n* short\\_pair: vi-de\n* chrF2\\_score: 0.484\n* bleu: 27.6\n* brevity\\_penalty: 0.958\n* ref\\_len: 3365.0\n* src\\_name: Vietnamese\n* tgt\\_name: German\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: de\n* prefer\\_old: False\n* long\\_pair: vie-deu\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### vie-eng * source group: Vietnamese * target group: English * OPUS readme: [vie-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-eng/README.md) * model: transformer-align * source language(s): vie vie_Hani * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.vie.eng | 42.8 | 0.608 | ### System Info: - hf_name: vie-eng - source_languages: vie - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'en'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-eng/opus-2020-06-17.test.txt - src_alpha3: vie - tgt_alpha3: eng - short_pair: vi-en - chrF2_score: 0.608 - bleu: 42.8 - brevity_penalty: 0.955 - ref_len: 20241.0 - src_name: Vietnamese - tgt_name: English - train_date: 2020-06-17 - src_alpha2: vi - tgt_alpha2: en - prefer_old: False - long_pair: vie-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
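A short batched-translation sketch for this pair, assuming `transformers` and `sentencepiece` are installed; the Vietnamese sentences and the beam-search settings are illustrative choices, not values taken from the training setup.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-vi-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = [
    "Xin chào, bạn khỏe không?",
    "Hôm nay trời đẹp.",
]
# Padding lets sentences of different lengths share one batch;
# num_beams and max_length are decoding choices, not fixed by the model card.
batch = tokenizer(sentences, return_tensors="pt", padding=True)
generated = model.generate(**batch, num_beams=4, max_length=128)
for translation in tokenizer.batch_decode(generated, skip_special_tokens=True):
    print(translation)
```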
{"language": ["vi", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-vi-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "vi", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #vi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### vie-eng * source group: Vietnamese * target group: English * OPUS readme: vie-eng * model: transformer-align * source language(s): vie vie\_Hani * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 42.8, chr-F: 0.608 ### System Info: * hf\_name: vie-eng * source\_languages: vie * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['vi', 'en'] * src\_constituents: {'vie', 'vie\_Hani'} * tgt\_constituents: {'eng'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: vie * tgt\_alpha3: eng * short\_pair: vi-en * chrF2\_score: 0.608 * bleu: 42.8 * brevity\_penalty: 0.955 * ref\_len: 20241.0 * src\_name: Vietnamese * tgt\_name: English * train\_date: 2020-06-17 * src\_alpha2: vi * tgt\_alpha2: en * prefer\_old: False * long\_pair: vie-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### vie-eng\n\n\n* source group: Vietnamese\n* target group: English\n* OPUS readme: vie-eng\n* model: transformer-align\n* source language(s): vie vie\\_Hani\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.8, chr-F: 0.608", "### System Info:\n\n\n* hf\\_name: vie-eng\n* source\\_languages: vie\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'en']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: eng\n* short\\_pair: vi-en\n* chrF2\\_score: 0.608\n* bleu: 42.8\n* brevity\\_penalty: 0.955\n* ref\\_len: 20241.0\n* src\\_name: Vietnamese\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: vie-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### vie-eng\n\n\n* source group: Vietnamese\n* target group: English\n* OPUS readme: vie-eng\n* model: transformer-align\n* source language(s): vie vie\\_Hani\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.8, chr-F: 0.608", "### System Info:\n\n\n* hf\\_name: vie-eng\n* source\\_languages: vie\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'en']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: eng\n* short\\_pair: vi-en\n* chrF2\\_score: 0.608\n* bleu: 42.8\n* brevity\\_penalty: 0.955\n* ref\\_len: 20241.0\n* src\\_name: Vietnamese\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: vie-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 136, 399 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### vie-eng\n\n\n* source group: Vietnamese\n* target group: English\n* OPUS readme: vie-eng\n* model: transformer-align\n* source language(s): vie vie\\_Hani\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.8, chr-F: 0.608### System Info:\n\n\n* hf\\_name: vie-eng\n* source\\_languages: vie\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'en']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: eng\n* short\\_pair: vi-en\n* chrF2\\_score: 0.608\n* bleu: 42.8\n* brevity\\_penalty: 0.955\n* ref\\_len: 20241.0\n* src\\_name: Vietnamese\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: vie-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
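*Illustrative aside (not part of the record above):* the vie-eng card describes a MarianMT checkpoint whose `short_pair` is `vi-en`; following the naming pattern of the other records in this dump, the corresponding Hub id is assumed to be `Helsinki-NLP/opus-mt-vi-en`. A minimal usage sketch with the `transformers` library follows — the example sentence and generation settings are assumptions, not taken from the card:

```python
# Minimal sketch: Vietnamese -> English translation with the MarianMT checkpoint
# described in the vie-eng record (Hub id assumed from the opus-mt naming pattern).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-vi-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_sentences = ["Tôi là sinh viên."]  # illustrative input, not from the Tatoeba test set

# Encode with the model's SentencePiece vocabulary (spm32k, per the card)
batch = tokenizer(src_sentences, return_tensors="pt", padding=True)

# num_beams / max_length are illustrative choices, not settings from the card
generated = model.generate(**batch, num_beams=4, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```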
### vie-epo * source group: Vietnamese * target group: Esperanto * OPUS readme: [vie-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-epo/README.md) * model: transformer-align * source language(s): vie * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.vie.epo | 12.2 | 0.332 | ### System Info: - hf_name: vie-epo - source_languages: vie - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'eo'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-epo/opus-2020-06-16.test.txt - src_alpha3: vie - tgt_alpha3: epo - short_pair: vi-eo - chrF2_score: 0.332 - bleu: 12.2 - brevity_penalty: 0.99 - ref_len: 13637.0 - src_name: Vietnamese - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: vi - tgt_alpha2: eo - prefer_old: False - long_pair: vie-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["vi", "eo"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-vi-eo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "vi", "eo" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #vi #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### vie-epo * source group: Vietnamese * target group: Esperanto * OPUS readme: vie-epo * model: transformer-align * source language(s): vie * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 12.2, chr-F: 0.332 ### System Info: * hf\_name: vie-epo * source\_languages: vie * target\_languages: epo * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['vi', 'eo'] * src\_constituents: {'vie', 'vie\_Hani'} * tgt\_constituents: {'epo'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: vie * tgt\_alpha3: epo * short\_pair: vi-eo * chrF2\_score: 0.332 * bleu: 12.2 * brevity\_penalty: 0.99 * ref\_len: 13637.0 * src\_name: Vietnamese * tgt\_name: Esperanto * train\_date: 2020-06-16 * src\_alpha2: vi * tgt\_alpha2: eo * prefer\_old: False * long\_pair: vie-epo * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### vie-epo\n\n\n* source group: Vietnamese\n* target group: Esperanto\n* OPUS readme: vie-epo\n* model: transformer-align\n* source language(s): vie\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.2, chr-F: 0.332", "### System Info:\n\n\n* hf\\_name: vie-epo\n* source\\_languages: vie\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'eo']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: epo\n* short\\_pair: vi-eo\n* chrF2\\_score: 0.332\n* bleu: 12.2\n* brevity\\_penalty: 0.99\n* ref\\_len: 13637.0\n* src\\_name: Vietnamese\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: vi\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: vie-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### vie-epo\n\n\n* source group: Vietnamese\n* target group: Esperanto\n* OPUS readme: vie-epo\n* model: transformer-align\n* source language(s): vie\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.2, chr-F: 0.332", "### System Info:\n\n\n* hf\\_name: vie-epo\n* source\\_languages: vie\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'eo']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: epo\n* short\\_pair: vi-eo\n* chrF2\\_score: 0.332\n* bleu: 12.2\n* brevity\\_penalty: 0.99\n* ref\\_len: 13637.0\n* src\\_name: Vietnamese\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: vi\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: vie-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 135, 407 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### vie-epo\n\n\n* source group: Vietnamese\n* target group: Esperanto\n* OPUS readme: vie-epo\n* model: transformer-align\n* source language(s): vie\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.2, chr-F: 0.332### System Info:\n\n\n* hf\\_name: vie-epo\n* source\\_languages: vie\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'eo']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: epo\n* short\\_pair: vi-eo\n* chrF2\\_score: 0.332\n* bleu: 12.2\n* brevity\\_penalty: 0.99\n* ref\\_len: 13637.0\n* src\\_name: Vietnamese\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: vi\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: vie-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
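*Illustrative aside:* unlike most pairs in this dump, the vie-epo record lists `SentencePiece (spm4k,spm4k)`, i.e. a much smaller subword vocabulary than the usual spm32k. A sketch for inspecting the vocabulary that the checkpoint ships with; the exact `vocab_size` may differ slightly from the nominal spm size because of added special tokens:

```python
# Sketch: inspect the SentencePiece vocabulary of the vie-epo checkpoint
# (Hub id taken from the record above: Helsinki-NLP/opus-mt-vi-eo).
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-vi-eo")
print(tokenizer.vocab_size)  # nominally ~4k per the card's "spm4k,spm4k"

# SentencePiece marks word boundaries with the "▁" prefix
print(tokenizer.tokenize("Tôi nói tiếng Việt."))  # illustrative sentence
```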
### vie-spa * source group: Vietnamese * target group: Spanish * OPUS readme: [vie-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-spa/README.md) * model: transformer-align * source language(s): vie * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.vie.spa | 32.9 | 0.540 | ### System Info: - hf_name: vie-spa - source_languages: vie - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'es'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-spa/opus-2020-06-17.test.txt - src_alpha3: vie - tgt_alpha3: spa - short_pair: vi-es - chrF2_score: 0.54 - bleu: 32.9 - brevity_penalty: 0.953 - ref_len: 3832.0 - src_name: Vietnamese - tgt_name: Spanish - train_date: 2020-06-17 - src_alpha2: vi - tgt_alpha2: es - prefer_old: False - long_pair: vie-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["vi", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-vi-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "vi", "es" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #vi #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### vie-spa * source group: Vietnamese * target group: Spanish * OPUS readme: vie-spa * model: transformer-align * source language(s): vie * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 32.9, chr-F: 0.540 ### System Info: * hf\_name: vie-spa * source\_languages: vie * target\_languages: spa * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['vi', 'es'] * src\_constituents: {'vie', 'vie\_Hani'} * tgt\_constituents: {'spa'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: vie * tgt\_alpha3: spa * short\_pair: vi-es * chrF2\_score: 0.54 * bleu: 32.9 * brevity\_penalty: 0.953 * ref\_len: 3832.0 * src\_name: Vietnamese * tgt\_name: Spanish * train\_date: 2020-06-17 * src\_alpha2: vi * tgt\_alpha2: es * prefer\_old: False * long\_pair: vie-spa * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### vie-spa\n\n\n* source group: Vietnamese\n* target group: Spanish\n* OPUS readme: vie-spa\n* model: transformer-align\n* source language(s): vie\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.9, chr-F: 0.540", "### System Info:\n\n\n* hf\\_name: vie-spa\n* source\\_languages: vie\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'es']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: spa\n* short\\_pair: vi-es\n* chrF2\\_score: 0.54\n* bleu: 32.9\n* brevity\\_penalty: 0.953\n* ref\\_len: 3832.0\n* src\\_name: Vietnamese\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: vie-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### vie-spa\n\n\n* source group: Vietnamese\n* target group: Spanish\n* OPUS readme: vie-spa\n* model: transformer-align\n* source language(s): vie\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.9, chr-F: 0.540", "### System Info:\n\n\n* hf\\_name: vie-spa\n* source\\_languages: vie\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'es']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: spa\n* short\\_pair: vi-es\n* chrF2\\_score: 0.54\n* bleu: 32.9\n* brevity\\_penalty: 0.953\n* ref\\_len: 3832.0\n* src\\_name: Vietnamese\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: vie-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 130, 398 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### vie-spa\n\n\n* source group: Vietnamese\n* target group: Spanish\n* OPUS readme: vie-spa\n* model: transformer-align\n* source language(s): vie\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.9, chr-F: 0.540### System Info:\n\n\n* hf\\_name: vie-spa\n* source\\_languages: vie\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'es']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: spa\n* short\\_pair: vi-es\n* chrF2\\_score: 0.54\n* bleu: 32.9\n* brevity\\_penalty: 0.953\n* ref\\_len: 3832.0\n* src\\_name: Vietnamese\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: vie-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
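*Illustrative aside:* the BLEU and chr-F columns in these cards are corpus-level scores on the released Tatoeba test sets. The sketch below shows how such scores are typically computed with `sacrebleu`; the two-sentence hypothesis/reference lists are placeholders, and the division by 100 only maps sacrebleu's 0–100 chrF output onto the 0–1 convention used in the tables above:

```python
# Sketch: corpus BLEU and chrF2 in the style of the scores reported in these cards.
# A real evaluation would read the released test-set translations instead of these toys.
import sacrebleu

hypotheses = ["This is a small test .", "She lives in Hanoi ."]        # system output (placeholder)
references = [["This is a small test .", "She is living in Hanoi ."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}, chrF2 = {chrf.score / 100:.3f}")
```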
### vie-fra * source group: Vietnamese * target group: French * OPUS readme: [vie-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md) * model: transformer-align * source language(s): vie * target language(s): fra * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.vie.fra | 34.2 | 0.544 | ### System Info: - hf_name: vie-fra - source_languages: vie - target_languages: fra - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'fr'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'fra'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt - src_alpha3: vie - tgt_alpha3: fra - short_pair: vi-fr - chrF2_score: 0.544 - bleu: 34.2 - brevity_penalty: 0.955 - ref_len: 11519.0 - src_name: Vietnamese - tgt_name: French - train_date: 2020-06-17 - src_alpha2: vi - tgt_alpha2: fr - prefer_old: False - long_pair: vie-fra - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["vi", "fr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-vi-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "vi", "fr" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #vi #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### vie-fra * source group: Vietnamese * target group: French * OPUS readme: vie-fra * model: transformer-align * source language(s): vie * target language(s): fra * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 34.2, chr-F: 0.544 ### System Info: * hf\_name: vie-fra * source\_languages: vie * target\_languages: fra * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['vi', 'fr'] * src\_constituents: {'vie', 'vie\_Hani'} * tgt\_constituents: {'fra'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: vie * tgt\_alpha3: fra * short\_pair: vi-fr * chrF2\_score: 0.544 * bleu: 34.2 * brevity\_penalty: 0.955 * ref\_len: 11519.0 * src\_name: Vietnamese * tgt\_name: French * train\_date: 2020-06-17 * src\_alpha2: vi * tgt\_alpha2: fr * prefer\_old: False * long\_pair: vie-fra * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### vie-fra\n\n\n* source group: Vietnamese\n* target group: French\n* OPUS readme: vie-fra\n* model: transformer-align\n* source language(s): vie\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.2, chr-F: 0.544", "### System Info:\n\n\n* hf\\_name: vie-fra\n* source\\_languages: vie\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'fr']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: fra\n* short\\_pair: vi-fr\n* chrF2\\_score: 0.544\n* bleu: 34.2\n* brevity\\_penalty: 0.955\n* ref\\_len: 11519.0\n* src\\_name: Vietnamese\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: vie-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### vie-fra\n\n\n* source group: Vietnamese\n* target group: French\n* OPUS readme: vie-fra\n* model: transformer-align\n* source language(s): vie\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.2, chr-F: 0.544", "### System Info:\n\n\n* hf\\_name: vie-fra\n* source\\_languages: vie\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'fr']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: fra\n* short\\_pair: vi-fr\n* chrF2\\_score: 0.544\n* bleu: 34.2\n* brevity\\_penalty: 0.955\n* ref\\_len: 11519.0\n* src\\_name: Vietnamese\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: vie-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 131, 399 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### vie-fra\n\n\n* source group: Vietnamese\n* target group: French\n* OPUS readme: vie-fra\n* model: transformer-align\n* source language(s): vie\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.2, chr-F: 0.544### System Info:\n\n\n* hf\\_name: vie-fra\n* source\\_languages: vie\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'fr']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: fra\n* short\\_pair: vi-fr\n* chrF2\\_score: 0.544\n* bleu: 34.2\n* brevity\\_penalty: 0.955\n* ref\\_len: 11519.0\n* src\\_name: Vietnamese\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: vie-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### vie-ita * source group: Vietnamese * target group: Italian * OPUS readme: [vie-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-ita/README.md) * model: transformer-align * source language(s): vie * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.vie.ita | 31.2 | 0.548 | ### System Info: - hf_name: vie-ita - source_languages: vie - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'it'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-ita/opus-2020-06-17.test.txt - src_alpha3: vie - tgt_alpha3: ita - short_pair: vi-it - chrF2_score: 0.5479999999999999 - bleu: 31.2 - brevity_penalty: 0.932 - ref_len: 1774.0 - src_name: Vietnamese - tgt_name: Italian - train_date: 2020-06-17 - src_alpha2: vi - tgt_alpha2: it - prefer_old: False - long_pair: vie-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["vi", "it"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-vi-it
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "vi", "it" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #vi #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### vie-ita * source group: Vietnamese * target group: Italian * OPUS readme: vie-ita * model: transformer-align * source language(s): vie * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 31.2, chr-F: 0.548 ### System Info: * hf\_name: vie-ita * source\_languages: vie * target\_languages: ita * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['vi', 'it'] * src\_constituents: {'vie', 'vie\_Hani'} * tgt\_constituents: {'ita'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: vie * tgt\_alpha3: ita * short\_pair: vi-it * chrF2\_score: 0.5479999999999999 * bleu: 31.2 * brevity\_penalty: 0.932 * ref\_len: 1774.0 * src\_name: Vietnamese * tgt\_name: Italian * train\_date: 2020-06-17 * src\_alpha2: vi * tgt\_alpha2: it * prefer\_old: False * long\_pair: vie-ita * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### vie-ita\n\n\n* source group: Vietnamese\n* target group: Italian\n* OPUS readme: vie-ita\n* model: transformer-align\n* source language(s): vie\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.2, chr-F: 0.548", "### System Info:\n\n\n* hf\\_name: vie-ita\n* source\\_languages: vie\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'it']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: ita\n* short\\_pair: vi-it\n* chrF2\\_score: 0.5479999999999999\n* bleu: 31.2\n* brevity\\_penalty: 0.932\n* ref\\_len: 1774.0\n* src\\_name: Vietnamese\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: vie-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### vie-ita\n\n\n* source group: Vietnamese\n* target group: Italian\n* OPUS readme: vie-ita\n* model: transformer-align\n* source language(s): vie\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.2, chr-F: 0.548", "### System Info:\n\n\n* hf\\_name: vie-ita\n* source\\_languages: vie\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'it']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: ita\n* short\\_pair: vi-it\n* chrF2\\_score: 0.5479999999999999\n* bleu: 31.2\n* brevity\\_penalty: 0.932\n* ref\\_len: 1774.0\n* src\\_name: Vietnamese\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: vie-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 416 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### vie-ita\n\n\n* source group: Vietnamese\n* target group: Italian\n* OPUS readme: vie-ita\n* model: transformer-align\n* source language(s): vie\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.2, chr-F: 0.548### System Info:\n\n\n* hf\\_name: vie-ita\n* source\\_languages: vie\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'it']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: ita\n* short\\_pair: vi-it\n* chrF2\\_score: 0.5479999999999999\n* bleu: 31.2\n* brevity\\_penalty: 0.932\n* ref\\_len: 1774.0\n* src\\_name: Vietnamese\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: vie-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### vie-rus * source group: Vietnamese * target group: Russian * OPUS readme: [vie-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-rus/README.md) * model: transformer-align * source language(s): vie * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.vie.rus | 16.9 | 0.331 | ### System Info: - hf_name: vie-rus - source_languages: vie - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'ru'] - src_constituents: {'vie', 'vie_Hani'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-rus/opus-2020-06-17.test.txt - src_alpha3: vie - tgt_alpha3: rus - short_pair: vi-ru - chrF2_score: 0.331 - bleu: 16.9 - brevity_penalty: 0.878 - ref_len: 2207.0 - src_name: Vietnamese - tgt_name: Russian - train_date: 2020-06-17 - src_alpha2: vi - tgt_alpha2: ru - prefer_old: False - long_pair: vie-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["vi", "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-vi-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "vi", "ru" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #vi #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### vie-rus * source group: Vietnamese * target group: Russian * OPUS readme: vie-rus * model: transformer-align * source language(s): vie * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 16.9, chr-F: 0.331 ### System Info: * hf\_name: vie-rus * source\_languages: vie * target\_languages: rus * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['vi', 'ru'] * src\_constituents: {'vie', 'vie\_Hani'} * tgt\_constituents: {'rus'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: vie * tgt\_alpha3: rus * short\_pair: vi-ru * chrF2\_score: 0.331 * bleu: 16.9 * brevity\_penalty: 0.878 * ref\_len: 2207.0 * src\_name: Vietnamese * tgt\_name: Russian * train\_date: 2020-06-17 * src\_alpha2: vi * tgt\_alpha2: ru * prefer\_old: False * long\_pair: vie-rus * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### vie-rus\n\n\n* source group: Vietnamese\n* target group: Russian\n* OPUS readme: vie-rus\n* model: transformer-align\n* source language(s): vie\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.9, chr-F: 0.331", "### System Info:\n\n\n* hf\\_name: vie-rus\n* source\\_languages: vie\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'ru']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: rus\n* short\\_pair: vi-ru\n* chrF2\\_score: 0.331\n* bleu: 16.9\n* brevity\\_penalty: 0.878\n* ref\\_len: 2207.0\n* src\\_name: Vietnamese\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: vie-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### vie-rus\n\n\n* source group: Vietnamese\n* target group: Russian\n* OPUS readme: vie-rus\n* model: transformer-align\n* source language(s): vie\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.9, chr-F: 0.331", "### System Info:\n\n\n* hf\\_name: vie-rus\n* source\\_languages: vie\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'ru']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: rus\n* short\\_pair: vi-ru\n* chrF2\\_score: 0.331\n* bleu: 16.9\n* brevity\\_penalty: 0.878\n* ref\\_len: 2207.0\n* src\\_name: Vietnamese\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: vie-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 130, 398 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vi #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### vie-rus\n\n\n* source group: Vietnamese\n* target group: Russian\n* OPUS readme: vie-rus\n* model: transformer-align\n* source language(s): vie\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.9, chr-F: 0.331### System Info:\n\n\n* hf\\_name: vie-rus\n* source\\_languages: vie\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['vi', 'ru']\n* src\\_constituents: {'vie', 'vie\\_Hani'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: vie\n* tgt\\_alpha3: rus\n* short\\_pair: vi-ru\n* chrF2\\_score: 0.331\n* bleu: 16.9\n* brevity\\_penalty: 0.878\n* ref\\_len: 2207.0\n* src\\_name: Vietnamese\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: vi\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: vie-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
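*Illustrative aside:* several records report a `brevity_penalty` next to BLEU (0.878 for vie-rus above, with `ref_len` 2207). For reference, the standard BLEU brevity penalty used by sacrebleu is sketched below; the implied hypothesis length is a back-of-the-envelope illustration derived from those two numbers, not a value stated in the card:

```latex
% Standard BLEU brevity penalty (Papineni et al., 2002):
%   BP = 1              if c > r   (hypothesis longer than reference)
%   BP = exp(1 - r/c)   if c <= r
% Worked illustration with the vie-rus numbers (r = 2207, BP = 0.878):
\[
\mathrm{BP} = \exp\!\Bigl(1 - \tfrac{r}{c}\Bigr)
\;\Longrightarrow\;
\tfrac{r}{c} = 1 - \ln \mathrm{BP} \approx 1 - \ln 0.878 \approx 1.130,
\qquad
c \approx \tfrac{2207}{1.130} \approx 1953 .
\]
```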
### opus-mt-vsl-es * source languages: vsl * target languages: es * OPUS readme: [vsl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/vsl-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/vsl-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/vsl-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/vsl-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.vsl.es | 91.9 | 0.944 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-vsl-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vsl", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #vsl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-vsl-es * source languages: vsl * target languages: es * OPUS readme: vsl-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 91.9, chr-F: 0.944
[ "### opus-mt-vsl-es\n\n\n* source languages: vsl\n* target languages: es\n* OPUS readme: vsl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 91.9, chr-F: 0.944" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vsl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-vsl-es\n\n\n* source languages: vsl\n* target languages: es\n* OPUS readme: vsl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 91.9, chr-F: 0.944" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #vsl #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-vsl-es\n\n\n* source languages: vsl\n* target languages: es\n* OPUS readme: vsl-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 91.9, chr-F: 0.944" ]
translation
transformers
### opus-mt-wa-en * source languages: wa * target languages: en * OPUS readme: [wa-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wa-en/README.md) * dataset: opus-enwa * model: transformer * pre-processing: normalization + SentencePiece * download original weights: [opus-enwa-2020-03-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/wa-en/opus-enwa-2020-03-21.zip) * test set translations: [opus-enwa-2020-03-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wa-en/opus-enwa-2020-03-21.test.txt) * test set scores: [opus-enwa-2020-03-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wa-en/opus-enwa-2020-03-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | enwa.fr.en | 42.6 | 0.564 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-wa-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "wa", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #wa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-wa-en * source languages: wa * target languages: en * OPUS readme: wa-en * dataset: opus-enwa * model: transformer * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 42.6, chr-F: 0.564
[ "### opus-mt-wa-en\n\n\n* source languages: wa\n* target languages: en\n* OPUS readme: wa-en\n* dataset: opus-enwa\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.6, chr-F: 0.564" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #wa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-wa-en\n\n\n* source languages: wa\n* target languages: en\n* OPUS readme: wa-en\n* dataset: opus-enwa\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.6, chr-F: 0.564" ]
[ 51, 107 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #wa #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-wa-en\n\n\n* source languages: wa\n* target languages: en\n* OPUS readme: wa-en\n* dataset: opus-enwa\n* model: transformer\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.6, chr-F: 0.564" ]
translation
transformers
### opus-mt-wal-en * source languages: wal * target languages: en * OPUS readme: [wal-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/wal-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/wal-en/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.wal.en | 22.5 | 0.386 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-wal-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "wal", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #wal #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-wal-en * source languages: wal * target languages: en * OPUS readme: wal-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.5, chr-F: 0.386
[ "### opus-mt-wal-en\n\n\n* source languages: wal\n* target languages: en\n* OPUS readme: wal-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.5, chr-F: 0.386" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #wal #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-wal-en\n\n\n* source languages: wal\n* target languages: en\n* OPUS readme: wal-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.5, chr-F: 0.386" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #wal #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-wal-en\n\n\n* source languages: wal\n* target languages: en\n* OPUS readme: wal-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.5, chr-F: 0.386" ]
translation
transformers
### war-eng * source group: Waray (Philippines) * target group: English * OPUS readme: [war-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/war-eng/README.md) * model: transformer-align * source language(s): war * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.war.eng | 12.3 | 0.308 | ### System Info: - hf_name: war-eng - source_languages: war - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/war-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['war', 'en'] - src_constituents: {'war'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.test.txt - src_alpha3: war - tgt_alpha3: eng - short_pair: war-en - chrF2_score: 0.308 - bleu: 12.3 - brevity_penalty: 1.0 - ref_len: 11345.0 - src_name: Waray (Philippines) - tgt_name: English - train_date: 2020-06-16 - src_alpha2: war - tgt_alpha2: en - prefer_old: False - long_pair: war-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["war", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-war-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "war", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "war", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #war #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### war-eng * source group: Waray (Philippines) * target group: English * OPUS readme: war-eng * model: transformer-align * source language(s): war * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 12.3, chr-F: 0.308 ### System Info: * hf\_name: war-eng * source\_languages: war * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['war', 'en'] * src\_constituents: {'war'} * tgt\_constituents: {'eng'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: war * tgt\_alpha3: eng * short\_pair: war-en * chrF2\_score: 0.308 * bleu: 12.3 * brevity\_penalty: 1.0 * ref\_len: 11345.0 * src\_name: Waray (Philippines) * tgt\_name: English * train\_date: 2020-06-16 * src\_alpha2: war * tgt\_alpha2: en * prefer\_old: False * long\_pair: war-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### war-eng\n\n\n* source group: Waray (Philippines)\n* target group: English\n* OPUS readme: war-eng\n* model: transformer-align\n* source language(s): war\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.3, chr-F: 0.308", "### System Info:\n\n\n* hf\\_name: war-eng\n* source\\_languages: war\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['war', 'en']\n* src\\_constituents: {'war'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: war\n* tgt\\_alpha3: eng\n* short\\_pair: war-en\n* chrF2\\_score: 0.308\n* bleu: 12.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 11345.0\n* src\\_name: Waray (Philippines)\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: war\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: war-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #war #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### war-eng\n\n\n* source group: Waray (Philippines)\n* target group: English\n* OPUS readme: war-eng\n* model: transformer-align\n* source language(s): war\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.3, chr-F: 0.308", "### System Info:\n\n\n* hf\\_name: war-eng\n* source\\_languages: war\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['war', 'en']\n* src\\_constituents: {'war'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: war\n* tgt\\_alpha3: eng\n* short\\_pair: war-en\n* chrF2\\_score: 0.308\n* bleu: 12.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 11345.0\n* src\\_name: Waray (Philippines)\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: war\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: war-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 393 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #war #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### war-eng\n\n\n* source group: Waray (Philippines)\n* target group: English\n* OPUS readme: war-eng\n* model: transformer-align\n* source language(s): war\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.3, chr-F: 0.308### System Info:\n\n\n* hf\\_name: war-eng\n* source\\_languages: war\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['war', 'en']\n* src\\_constituents: {'war'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: war\n* tgt\\_alpha3: eng\n* short\\_pair: war-en\n* chrF2\\_score: 0.308\n* bleu: 12.3\n* brevity\\_penalty: 1.0\n* ref\\_len: 11345.0\n* src\\_name: Waray (Philippines)\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: war\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: war-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]