Dataset schema (column name, dtype, and value range):

| column          | dtype           | range       |
|-----------------|-----------------|-------------|
| pipeline_tag    | stringclasses   | 48 values   |
| library_name    | stringclasses   | 198 values  |
| text            | stringlengths   | 1 to 900k   |
| metadata        | stringlengths   | 2 to 438k   |
| id              | stringlengths   | 5 to 122    |
| last_modified   | null            | n/a         |
| tags            | sequencelengths | 1 to 1.84k  |
| sha             | null            | n/a         |
| created_at      | stringlengths   | 25 to 25    |
| arxiv           | sequencelengths | 0 to 201    |
| languages       | sequencelengths | 0 to 1.83k  |
| tags_str        | stringlengths   | 17 to 9.34k |
| text_str        | stringlengths   | 0 to 389k   |
| text_lists      | sequencelengths | 0 to 722    |
| processed_texts | sequencelengths | 1 to 723    |
| tokens_length   | sequencelengths | 1 to 723    |
| input_texts     | sequencelengths | 1 to 1      |
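The rows below follow this schema. A minimal, hedged sketch of how such a dump could be inspected with the `datasets` library follows; the file name `model_cards.parquet` is hypothetical and stands in for the dataset's actual data files.

```python
# Sketch only: "model_cards.parquet" is a hypothetical placeholder for the
# real data files backing this dataset.
from datasets import load_dataset

ds = load_dataset("parquet", data_files="model_cards.parquet", split="train")

# Each row holds one Hugging Face model card plus several processed variants.
row = ds[0]
print(row["id"])            # e.g. "Helsinki-NLP/opus-mt-it-fr"
print(row["pipeline_tag"])  # e.g. "translation"
print(row["text"][:200])    # raw README text
```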
translation
transformers
### opus-mt-it-fr

* source languages: it
* target languages: fr
* OPUS readme: [it-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-fr/opus-2020-01-16.eval.txt)

## Benchmarks

| testset       | BLEU | chr-F |
|---------------|------|-------|
| Tatoeba.it.fr | 67.9 | 0.792 |
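The card above does not include a usage snippet; the following is a minimal sketch (not part of the original card) of loading this checkpoint with the `transformers` Marian classes. The Italian example sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-it-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an Italian sentence into French.
batch = tokenizer(["La vita è bella."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```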
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-it-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #it #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-it-fr * source languages: it * target languages: fr * OPUS readme: it-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 67.9, chr-F: 0.792
[ "### opus-mt-it-fr\n\n\n* source languages: it\n* target languages: fr\n* OPUS readme: it-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 67.9, chr-F: 0.792" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-it-fr\n\n\n* source languages: it\n* target languages: fr\n* OPUS readme: it-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 67.9, chr-F: 0.792" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-it-fr\n\n\n* source languages: it\n* target languages: fr\n* OPUS readme: it-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 67.9, chr-F: 0.792" ]
translation
transformers
### ita-isl

* source group: Italian
* target group: Icelandic
* OPUS readme: [ita-isl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-isl/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): isl
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-isl/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-isl/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-isl/opus-2020-06-17.eval.txt)

## Benchmarks

| testset              | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.ita.isl | 28.6 | 0.524 |

### System Info:
- hf_name: ita-isl
- source_languages: ita
- target_languages: isl
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-isl/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'is']
- src_constituents: {'ita'}
- tgt_constituents: {'isl'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-isl/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-isl/opus-2020-06-17.test.txt
- src_alpha3: ita
- tgt_alpha3: isl
- short_pair: it-is
- chrF2_score: 0.524
- bleu: 28.6
- brevity_penalty: 0.972
- ref_len: 1459.0
- src_name: Italian
- tgt_name: Icelandic
- train_date: 2020-06-17
- src_alpha2: it
- tgt_alpha2: is
- prefer_old: False
- long_pair: ita-isl
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["it", "is"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-it-is
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "is", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it", "is" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #it #is #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ita-isl * source group: Italian * target group: Icelandic * OPUS readme: ita-isl * model: transformer-align * source language(s): ita * target language(s): isl * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 28.6, chr-F: 0.524 ### System Info: * hf\_name: ita-isl * source\_languages: ita * target\_languages: isl * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['it', 'is'] * src\_constituents: {'ita'} * tgt\_constituents: {'isl'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ita * tgt\_alpha3: isl * short\_pair: it-is * chrF2\_score: 0.524 * bleu: 28.6 * brevity\_penalty: 0.972 * ref\_len: 1459.0 * src\_name: Italian * tgt\_name: Icelandic * train\_date: 2020-06-17 * src\_alpha2: it * tgt\_alpha2: is * prefer\_old: False * long\_pair: ita-isl * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ita-isl\n\n\n* source group: Italian\n* target group: Icelandic\n* OPUS readme: ita-isl\n* model: transformer-align\n* source language(s): ita\n* target language(s): isl\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.524", "### System Info:\n\n\n* hf\\_name: ita-isl\n* source\\_languages: ita\n* target\\_languages: isl\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'is']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'isl'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: isl\n* short\\_pair: it-is\n* chrF2\\_score: 0.524\n* bleu: 28.6\n* brevity\\_penalty: 0.972\n* ref\\_len: 1459.0\n* src\\_name: Italian\n* tgt\\_name: Icelandic\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: is\n* prefer\\_old: False\n* long\\_pair: ita-isl\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #is #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ita-isl\n\n\n* source group: Italian\n* target group: Icelandic\n* OPUS readme: ita-isl\n* model: transformer-align\n* source language(s): ita\n* target language(s): isl\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.524", "### System Info:\n\n\n* hf\\_name: ita-isl\n* source\\_languages: ita\n* target\\_languages: isl\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'is']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'isl'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: isl\n* short\\_pair: it-is\n* chrF2\\_score: 0.524\n* bleu: 28.6\n* brevity\\_penalty: 0.972\n* ref\\_len: 1459.0\n* src\\_name: Italian\n* tgt\\_name: Icelandic\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: is\n* prefer\\_old: False\n* long\\_pair: ita-isl\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 401 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #is #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ita-isl\n\n\n* source group: Italian\n* target group: Icelandic\n* OPUS readme: ita-isl\n* model: transformer-align\n* source language(s): ita\n* target language(s): isl\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.524### System Info:\n\n\n* hf\\_name: ita-isl\n* source\\_languages: ita\n* target\\_languages: isl\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'is']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'isl'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: isl\n* short\\_pair: it-is\n* chrF2\\_score: 0.524\n* bleu: 28.6\n* brevity\\_penalty: 0.972\n* ref\\_len: 1459.0\n* src\\_name: Italian\n* tgt\\_name: Icelandic\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: is\n* prefer\\_old: False\n* long\\_pair: ita-isl\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ita-lit

* source group: Italian
* target group: Lithuanian
* OPUS readme: [ita-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-lit/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): lit
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-lit/opus-2020-06-17.eval.txt)

## Benchmarks

| testset              | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.ita.lit | 38.1 | 0.652 |

### System Info:
- hf_name: ita-lit
- source_languages: ita
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'lt']
- src_constituents: {'ita'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-lit/opus-2020-06-17.test.txt
- src_alpha3: ita
- tgt_alpha3: lit
- short_pair: it-lt
- chrF2_score: 0.652
- bleu: 38.1
- brevity_penalty: 0.9590000000000001
- ref_len: 1321.0
- src_name: Italian
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: it
- tgt_alpha2: lt
- prefer_old: False
- long_pair: ita-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["it", "lt"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-it-lt
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "lt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it", "lt" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #it #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ita-lit * source group: Italian * target group: Lithuanian * OPUS readme: ita-lit * model: transformer-align * source language(s): ita * target language(s): lit * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 38.1, chr-F: 0.652 ### System Info: * hf\_name: ita-lit * source\_languages: ita * target\_languages: lit * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['it', 'lt'] * src\_constituents: {'ita'} * tgt\_constituents: {'lit'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ita * tgt\_alpha3: lit * short\_pair: it-lt * chrF2\_score: 0.652 * bleu: 38.1 * brevity\_penalty: 0.9590000000000001 * ref\_len: 1321.0 * src\_name: Italian * tgt\_name: Lithuanian * train\_date: 2020-06-17 * src\_alpha2: it * tgt\_alpha2: lt * prefer\_old: False * long\_pair: ita-lit * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ita-lit\n\n\n* source group: Italian\n* target group: Lithuanian\n* OPUS readme: ita-lit\n* model: transformer-align\n* source language(s): ita\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.1, chr-F: 0.652", "### System Info:\n\n\n* hf\\_name: ita-lit\n* source\\_languages: ita\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'lt']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: lit\n* short\\_pair: it-lt\n* chrF2\\_score: 0.652\n* bleu: 38.1\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 1321.0\n* src\\_name: Italian\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: ita-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ita-lit\n\n\n* source group: Italian\n* target group: Lithuanian\n* OPUS readme: ita-lit\n* model: transformer-align\n* source language(s): ita\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.1, chr-F: 0.652", "### System Info:\n\n\n* hf\\_name: ita-lit\n* source\\_languages: ita\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'lt']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: lit\n* short\\_pair: it-lt\n* chrF2\\_score: 0.652\n* bleu: 38.1\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 1321.0\n* src\\_name: Italian\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: ita-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 402 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #lt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ita-lit\n\n\n* source group: Italian\n* target group: Lithuanian\n* OPUS readme: ita-lit\n* model: transformer-align\n* source language(s): ita\n* target language(s): lit\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.1, chr-F: 0.652### System Info:\n\n\n* hf\\_name: ita-lit\n* source\\_languages: ita\n* target\\_languages: lit\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'lt']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'lit'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: lit\n* short\\_pair: it-lt\n* chrF2\\_score: 0.652\n* bleu: 38.1\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 1321.0\n* src\\_name: Italian\n* tgt\\_name: Lithuanian\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: lt\n* prefer\\_old: False\n* long\\_pair: ita-lit\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ita-msa

* source group: Italian
* target group: Malay (macrolanguage)
* OPUS readme: [ita-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-msa/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): ind zsm_Latn
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-msa/opus-2020-06-17.eval.txt)

## Benchmarks

| testset              | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.ita.msa | 26.0 | 0.536 |

### System Info:
- hf_name: ita-msa
- source_languages: ita
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ms']
- src_constituents: {'ita'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-msa/opus-2020-06-17.test.txt
- src_alpha3: ita
- tgt_alpha3: msa
- short_pair: it-ms
- chrF2_score: 0.536
- bleu: 26.0
- brevity_penalty: 0.9209999999999999
- ref_len: 2765.0
- src_name: Italian
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: it
- tgt_alpha2: ms
- prefer_old: False
- long_pair: ita-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
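Since this checkpoint has multiple target languages (ind, zsm_Latn), the sentence-initial `>>id<<` token mentioned above selects the output language. The sketch below is not part of the original card; the example sentence and token choice are illustrative, and the exact token set is defined by the model's vocabulary.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-it-ms"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The target language is chosen with a sentence-initial token such as
# ">>ind<<" (Indonesian) or ">>zsm_Latn<<" (Standard Malay), matching the
# target language(s) listed in the card.
src = [">>zsm_Latn<< Buongiorno, come stai?"]
batch = tokenizer(src, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```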
{"language": ["it", "ms"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-it-ms
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "ms", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it", "ms" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #it #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ita-msa * source group: Italian * target group: Malay (macrolanguage) * OPUS readme: ita-msa * model: transformer-align * source language(s): ita * target language(s): ind zsm\_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.0, chr-F: 0.536 ### System Info: * hf\_name: ita-msa * source\_languages: ita * target\_languages: msa * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['it', 'ms'] * src\_constituents: {'ita'} * tgt\_constituents: {'zsm\_Latn', 'ind', 'max\_Latn', 'zlm\_Latn', 'min'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ita * tgt\_alpha3: msa * short\_pair: it-ms * chrF2\_score: 0.536 * bleu: 26.0 * brevity\_penalty: 0.9209999999999999 * ref\_len: 2765.0 * src\_name: Italian * tgt\_name: Malay (macrolanguage) * train\_date: 2020-06-17 * src\_alpha2: it * tgt\_alpha2: ms * prefer\_old: False * long\_pair: ita-msa * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ita-msa\n\n\n* source group: Italian\n* target group: Malay (macrolanguage)\n* OPUS readme: ita-msa\n* model: transformer-align\n* source language(s): ita\n* target language(s): ind zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.0, chr-F: 0.536", "### System Info:\n\n\n* hf\\_name: ita-msa\n* source\\_languages: ita\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ms']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: msa\n* short\\_pair: it-ms\n* chrF2\\_score: 0.536\n* bleu: 26.0\n* brevity\\_penalty: 0.9209999999999999\n* ref\\_len: 2765.0\n* src\\_name: Italian\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: ita-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ita-msa\n\n\n* source group: Italian\n* target group: Malay (macrolanguage)\n* OPUS readme: ita-msa\n* model: transformer-align\n* source language(s): ita\n* target language(s): ind zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.0, chr-F: 0.536", "### System Info:\n\n\n* hf\\_name: ita-msa\n* source\\_languages: ita\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ms']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: msa\n* short\\_pair: it-ms\n* chrF2\\_score: 0.536\n* bleu: 26.0\n* brevity\\_penalty: 0.9209999999999999\n* ref\\_len: 2765.0\n* src\\_name: Italian\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: ita-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 176, 452 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ita-msa\n\n\n* source group: Italian\n* target group: Malay (macrolanguage)\n* OPUS readme: ita-msa\n* model: transformer-align\n* source language(s): ita\n* target language(s): ind zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.0, chr-F: 0.536### System Info:\n\n\n* hf\\_name: ita-msa\n* source\\_languages: ita\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ms']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: msa\n* short\\_pair: it-ms\n* chrF2\\_score: 0.536\n* bleu: 26.0\n* brevity\\_penalty: 0.9209999999999999\n* ref\\_len: 2765.0\n* src\\_name: Italian\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: ita-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-it-sv

* source languages: it
* target languages: sv
* OPUS readme: [it-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-sv/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-sv/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-sv/opus-2020-01-24.eval.txt)

## Benchmarks

| testset       | BLEU | chr-F |
|---------------|------|-------|
| Tatoeba.it.sv | 56.0 | 0.707 |
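The BLEU and chr-F figures in these benchmark tables are corpus-level scores over the linked test sets. Below is a hedged sketch of computing comparable numbers with `sacrebleu` (a tool choice assumed here, not named by the card); the hypotheses and references are made-up placeholders, and the exact scoring options used for OPUS-MT may differ.

```python
import sacrebleu

# Placeholder system outputs and references; the real test material is linked
# from the card (opus-2020-01-24.test.txt).
hypotheses = ["Jag älskar dig.", "Det regnar."]
references = [["Jag älskar dig.", "Det regnar i dag."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
# chr-F in the cards is reported on a 0-1 scale; sacrebleu returns 0-100.
print(f"BLEU: {bleu.score:.1f}  chr-F: {chrf.score / 100:.3f}")
```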
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-it-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #it #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-it-sv * source languages: it * target languages: sv * OPUS readme: it-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 56.0, chr-F: 0.707
[ "### opus-mt-it-sv\n\n\n* source languages: it\n* target languages: sv\n* OPUS readme: it-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.0, chr-F: 0.707" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-it-sv\n\n\n* source languages: it\n* target languages: sv\n* OPUS readme: it-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.0, chr-F: 0.707" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-it-sv\n\n\n* source languages: it\n* target languages: sv\n* OPUS readme: it-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 56.0, chr-F: 0.707" ]
translation
transformers
### ita-ukr

* source group: Italian
* target group: Ukrainian
* OPUS readme: [ita-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-ukr/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ukr/opus-2020-06-17.eval.txt)

## Benchmarks

| testset              | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.ita.ukr | 45.9 | 0.657 |

### System Info:
- hf_name: ita-ukr
- source_languages: ita
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'uk']
- src_constituents: {'ita'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-ukr/opus-2020-06-17.test.txt
- src_alpha3: ita
- tgt_alpha3: ukr
- short_pair: it-uk
- chrF2_score: 0.657
- bleu: 45.9
- brevity_penalty: 0.9890000000000001
- ref_len: 25353.0
- src_name: Italian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: it
- tgt_alpha2: uk
- prefer_old: False
- long_pair: ita-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["it", "uk"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-it-uk
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "uk", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it", "uk" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #it #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ita-ukr * source group: Italian * target group: Ukrainian * OPUS readme: ita-ukr * model: transformer-align * source language(s): ita * target language(s): ukr * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 45.9, chr-F: 0.657 ### System Info: * hf\_name: ita-ukr * source\_languages: ita * target\_languages: ukr * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['it', 'uk'] * src\_constituents: {'ita'} * tgt\_constituents: {'ukr'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ita * tgt\_alpha3: ukr * short\_pair: it-uk * chrF2\_score: 0.657 * bleu: 45.9 * brevity\_penalty: 0.9890000000000001 * ref\_len: 25353.0 * src\_name: Italian * tgt\_name: Ukrainian * train\_date: 2020-06-17 * src\_alpha2: it * tgt\_alpha2: uk * prefer\_old: False * long\_pair: ita-ukr * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ita-ukr\n\n\n* source group: Italian\n* target group: Ukrainian\n* OPUS readme: ita-ukr\n* model: transformer-align\n* source language(s): ita\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.9, chr-F: 0.657", "### System Info:\n\n\n* hf\\_name: ita-ukr\n* source\\_languages: ita\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'uk']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: ukr\n* short\\_pair: it-uk\n* chrF2\\_score: 0.657\n* bleu: 45.9\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 25353.0\n* src\\_name: Italian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: ita-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ita-ukr\n\n\n* source group: Italian\n* target group: Ukrainian\n* OPUS readme: ita-ukr\n* model: transformer-align\n* source language(s): ita\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.9, chr-F: 0.657", "### System Info:\n\n\n* hf\\_name: ita-ukr\n* source\\_languages: ita\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'uk']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: ukr\n* short\\_pair: it-uk\n* chrF2\\_score: 0.657\n* bleu: 45.9\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 25353.0\n* src\\_name: Italian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: ita-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 137, 407 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #uk #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ita-ukr\n\n\n* source group: Italian\n* target group: Ukrainian\n* OPUS readme: ita-ukr\n* model: transformer-align\n* source language(s): ita\n* target language(s): ukr\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.9, chr-F: 0.657### System Info:\n\n\n* hf\\_name: ita-ukr\n* source\\_languages: ita\n* target\\_languages: ukr\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'uk']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'ukr'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: ukr\n* short\\_pair: it-uk\n* chrF2\\_score: 0.657\n* bleu: 45.9\n* brevity\\_penalty: 0.9890000000000001\n* ref\\_len: 25353.0\n* src\\_name: Italian\n* tgt\\_name: Ukrainian\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: uk\n* prefer\\_old: False\n* long\\_pair: ita-ukr\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### ita-vie

* source group: Italian
* target group: Vietnamese
* OPUS readme: [ita-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-vie/README.md)
* model: transformer-align
* source language(s): ita
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-vie/opus-2020-06-17.eval.txt)

## Benchmarks

| testset              | BLEU | chr-F |
|----------------------|------|-------|
| Tatoeba-test.ita.vie | 36.2 | 0.535 |

### System Info:
- hf_name: ita-vie
- source_languages: ita
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'vi']
- src_constituents: {'ita'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-vie/opus-2020-06-17.test.txt
- src_alpha3: ita
- tgt_alpha3: vie
- short_pair: it-vi
- chrF2_score: 0.535
- bleu: 36.2
- brevity_penalty: 1.0
- ref_len: 2144.0
- src_name: Italian
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: it
- tgt_alpha2: vi
- prefer_old: False
- long_pair: ita-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
{"language": ["it", "vi"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-it-vi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "vi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it", "vi" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #it #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### ita-vie * source group: Italian * target group: Vietnamese * OPUS readme: ita-vie * model: transformer-align * source language(s): ita * target language(s): vie * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 36.2, chr-F: 0.535 ### System Info: * hf\_name: ita-vie * source\_languages: ita * target\_languages: vie * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['it', 'vi'] * src\_constituents: {'ita'} * tgt\_constituents: {'vie', 'vie\_Hani'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: ita * tgt\_alpha3: vie * short\_pair: it-vi * chrF2\_score: 0.535 * bleu: 36.2 * brevity\_penalty: 1.0 * ref\_len: 2144.0 * src\_name: Italian * tgt\_name: Vietnamese * train\_date: 2020-06-17 * src\_alpha2: it * tgt\_alpha2: vi * prefer\_old: False * long\_pair: ita-vie * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### ita-vie\n\n\n* source group: Italian\n* target group: Vietnamese\n* OPUS readme: ita-vie\n* model: transformer-align\n* source language(s): ita\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.2, chr-F: 0.535", "### System Info:\n\n\n* hf\\_name: ita-vie\n* source\\_languages: ita\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'vi']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: vie\n* short\\_pair: it-vi\n* chrF2\\_score: 0.535\n* bleu: 36.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 2144.0\n* src\\_name: Italian\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: ita-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### ita-vie\n\n\n* source group: Italian\n* target group: Vietnamese\n* OPUS readme: ita-vie\n* model: transformer-align\n* source language(s): ita\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.2, chr-F: 0.535", "### System Info:\n\n\n* hf\\_name: ita-vie\n* source\\_languages: ita\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'vi']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: vie\n* short\\_pair: it-vi\n* chrF2\\_score: 0.535\n* bleu: 36.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 2144.0\n* src\\_name: Italian\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: ita-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 403 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### ita-vie\n\n\n* source group: Italian\n* target group: Vietnamese\n* OPUS readme: ita-vie\n* model: transformer-align\n* source language(s): ita\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.2, chr-F: 0.535### System Info:\n\n\n* hf\\_name: ita-vie\n* source\\_languages: ita\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'vi']\n* src\\_constituents: {'ita'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: ita\n* tgt\\_alpha3: vie\n* short\\_pair: it-vi\n* chrF2\\_score: 0.535\n* bleu: 36.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 2144.0\n* src\\_name: Italian\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: it\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: ita-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### itc-eng

* source group: Italic languages
* target group: English
* OPUS readme: [itc-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-eng/README.md)
* model: transformer
* source language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| newsdev2016-enro-roneng.ron.eng | 36.5 | 0.628 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 30.9 | 0.561 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 35.5 | 0.590 |
| newssyscomb2009-fraeng.fra.eng | 29.2 | 0.560 |
| newssyscomb2009-itaeng.ita.eng | 32.2 | 0.583 |
| newssyscomb2009-spaeng.spa.eng | 29.3 | 0.563 |
| news-test2008-fraeng.fra.eng | 25.2 | 0.531 |
| news-test2008-spaeng.spa.eng | 26.3 | 0.539 |
| newstest2009-fraeng.fra.eng | 28.5 | 0.555 |
| newstest2009-itaeng.ita.eng | 31.6 | 0.578 |
| newstest2009-spaeng.spa.eng | 28.7 | 0.558 |
| newstest2010-fraeng.fra.eng | 29.7 | 0.571 |
| newstest2010-spaeng.spa.eng | 32.8 | 0.593 |
| newstest2011-fraeng.fra.eng | 30.9 | 0.580 |
| newstest2011-spaeng.spa.eng | 31.8 | 0.582 |
| newstest2012-fraeng.fra.eng | 31.1 | 0.576 |
| newstest2012-spaeng.spa.eng | 35.0 | 0.604 |
| newstest2013-fraeng.fra.eng | 31.7 | 0.573 |
| newstest2013-spaeng.spa.eng | 32.4 | 0.589 |
| newstest2014-fren-fraeng.fra.eng | 34.0 | 0.606 |
| newstest2016-enro-roneng.ron.eng | 34.8 | 0.608 |
| Tatoeba-test.arg-eng.arg.eng | 41.5 | 0.528 |
| Tatoeba-test.ast-eng.ast.eng | 36.0 | 0.519 |
| Tatoeba-test.cat-eng.cat.eng | 53.7 | 0.696 |
| Tatoeba-test.cos-eng.cos.eng | 56.5 | 0.640 |
| Tatoeba-test.egl-eng.egl.eng | 4.6 | 0.217 |
| Tatoeba-test.ext-eng.ext.eng | 39.1 | 0.547 |
| Tatoeba-test.fra-eng.fra.eng | 53.4 | 0.688 |
| Tatoeba-test.frm-eng.frm.eng | 22.3 | 0.409 |
| Tatoeba-test.gcf-eng.gcf.eng | 18.7 | 0.308 |
| Tatoeba-test.glg-eng.glg.eng | 54.8 | 0.701 |
| Tatoeba-test.hat-eng.hat.eng | 42.6 | 0.583 |
| Tatoeba-test.ita-eng.ita.eng | 64.8 | 0.767 |
| Tatoeba-test.lad-eng.lad.eng | 14.4 | 0.433 |
| Tatoeba-test.lat-eng.lat.eng | 19.5 | 0.390 |
| Tatoeba-test.lij-eng.lij.eng | 8.9 | 0.280 |
| Tatoeba-test.lld-eng.lld.eng | 17.4 | 0.331 |
| Tatoeba-test.lmo-eng.lmo.eng | 10.8 | 0.306 |
| Tatoeba-test.mfe-eng.mfe.eng | 66.0 | 0.820 |
| Tatoeba-test.msa-eng.msa.eng | 40.8 | 0.590 |
| Tatoeba-test.multi.eng | 47.6 | 0.634 |
| Tatoeba-test.mwl-eng.mwl.eng | 41.3 | 0.707 |
| Tatoeba-test.oci-eng.oci.eng | 20.3 | 0.401 |
| Tatoeba-test.pap-eng.pap.eng | 53.9 | 0.642 |
| Tatoeba-test.pms-eng.pms.eng | 12.2 | 0.334 |
| Tatoeba-test.por-eng.por.eng | 59.3 | 0.734 |
| Tatoeba-test.roh-eng.roh.eng | 17.7 | 0.420 |
| Tatoeba-test.ron-eng.ron.eng | 54.5 | 0.697 |
| Tatoeba-test.scn-eng.scn.eng | 40.0 | 0.443 |
| Tatoeba-test.spa-eng.spa.eng | 55.9 | 0.712 |
| Tatoeba-test.vec-eng.vec.eng | 11.2 | 0.304 |
| Tatoeba-test.wln-eng.wln.eng | 20.9 | 0.360 |

### System Info:
- hf_name: itc-eng
- source_languages: itc
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc', 'en']
- src_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-eng/opus2m-2020-08-01.test.txt
- src_alpha3: itc
- tgt_alpha3: eng
- short_pair: itc-en
- chrF2_score: 0.634
- bleu: 47.6
- brevity_penalty: 0.981
- ref_len: 77633.0
- src_name: Italic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: itc
- tgt_alpha2: en
- prefer_old: False
- long_pair: itc-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
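Because the source side of this checkpoint is multilingual and the target is fixed to English, no language token is needed on input; any of the listed Italic source languages can be passed to the same model. A minimal sketch using the `transformers` translation pipeline follows (not part of the original card); the example sentences are illustrative.

```python
from transformers import pipeline

# One checkpoint covers all listed Italic source languages; the target is
# always English, so no ">>id<<" token is required on the source side.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-itc-en")
print(translator("Queste frasi vengono tradotte in inglese."))  # Italian input
print(translator("Estas frases se traducen al inglés."))        # Spanish input
```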
{"language": ["it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-itc-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #it #ca #rm #es #ro #gl #sc #co #wa #pt #oc #an #id #fr #ht #itc #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### itc-eng * source group: Italic languages * target group: English * OPUS readme: itc-eng * model: transformer * source language(s): arg ast cat cos egl ext fra frm\_Latn gcf\_Latn glg hat ind ita lad lad\_Latn lat\_Latn lij lld\_Latn lmo max\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\_Latn vec wln zlm\_Latn zsm\_Latn * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 36.5, chr-F: 0.628 testset: URL, BLEU: 30.9, chr-F: 0.561 testset: URL, BLEU: 35.5, chr-F: 0.590 testset: URL, BLEU: 29.2, chr-F: 0.560 testset: URL, BLEU: 32.2, chr-F: 0.583 testset: URL, BLEU: 29.3, chr-F: 0.563 testset: URL, BLEU: 25.2, chr-F: 0.531 testset: URL, BLEU: 26.3, chr-F: 0.539 testset: URL, BLEU: 28.5, chr-F: 0.555 testset: URL, BLEU: 31.6, chr-F: 0.578 testset: URL, BLEU: 28.7, chr-F: 0.558 testset: URL, BLEU: 29.7, chr-F: 0.571 testset: URL, BLEU: 32.8, chr-F: 0.593 testset: URL, BLEU: 30.9, chr-F: 0.580 testset: URL, BLEU: 31.8, chr-F: 0.582 testset: URL, BLEU: 31.1, chr-F: 0.576 testset: URL, BLEU: 35.0, chr-F: 0.604 testset: URL, BLEU: 31.7, chr-F: 0.573 testset: URL, BLEU: 32.4, chr-F: 0.589 testset: URL, BLEU: 34.0, chr-F: 0.606 testset: URL, BLEU: 34.8, chr-F: 0.608 testset: URL, BLEU: 41.5, chr-F: 0.528 testset: URL, BLEU: 36.0, chr-F: 0.519 testset: URL, BLEU: 53.7, chr-F: 0.696 testset: URL, BLEU: 56.5, chr-F: 0.640 testset: URL, BLEU: 4.6, chr-F: 0.217 testset: URL, BLEU: 39.1, chr-F: 0.547 testset: URL, BLEU: 53.4, chr-F: 0.688 testset: URL, BLEU: 22.3, chr-F: 0.409 testset: URL, BLEU: 18.7, chr-F: 0.308 testset: URL, BLEU: 54.8, chr-F: 0.701 testset: URL, BLEU: 42.6, chr-F: 0.583 testset: URL, BLEU: 64.8, chr-F: 0.767 testset: URL, BLEU: 14.4, chr-F: 0.433 testset: URL, BLEU: 19.5, chr-F: 0.390 testset: URL, BLEU: 8.9, chr-F: 0.280 testset: URL, BLEU: 17.4, chr-F: 0.331 testset: URL, BLEU: 10.8, chr-F: 0.306 testset: URL, BLEU: 66.0, chr-F: 0.820 testset: URL, BLEU: 40.8, chr-F: 0.590 testset: URL, BLEU: 47.6, chr-F: 0.634 testset: URL, BLEU: 41.3, chr-F: 0.707 testset: URL, BLEU: 20.3, chr-F: 0.401 testset: URL, BLEU: 53.9, chr-F: 0.642 testset: URL, BLEU: 12.2, chr-F: 0.334 testset: URL, BLEU: 59.3, chr-F: 0.734 testset: URL, BLEU: 17.7, chr-F: 0.420 testset: URL, BLEU: 54.5, chr-F: 0.697 testset: URL, BLEU: 40.0, chr-F: 0.443 testset: URL, BLEU: 55.9, chr-F: 0.712 testset: URL, BLEU: 11.2, chr-F: 0.304 testset: URL, BLEU: 20.9, chr-F: 0.360 ### System Info: * hf\_name: itc-eng * source\_languages: itc * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc', 'en'] * src\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\_Latn', 'lad\_Latn', 'pcd', 'lat\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\_Latn', 'srd', 'gcf\_Latn', 'lld\_Latn', 'min', 'tmw\_Latn', 'cos', 'wln', 'zlm\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\_Latn', 'frm\_Latn', 'scn', 'mfe'} * tgt\_constituents: {'eng'} * src\_multilingual: True * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: itc * tgt\_alpha3: eng * short\_pair: itc-en * chrF2\_score: 0.634 * bleu: 47.6 * brevity\_penalty: 0.981 * ref\_len: 77633.0 * 
src\_name: Italic languages * tgt\_name: English * train\_date: 2020-08-01 * src\_alpha2: itc * tgt\_alpha2: en * prefer\_old: False * long\_pair: itc-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
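A minimal usage sketch for the itc-eng checkpoint described above, not part of the original card. The repository id "Helsinki-NLP/opus-mt-itc-en" is assumed from the short_pair field and the naming convention of the other cards in this dump; the example sentence and its output are illustrative only.

```python
# Sketch: loading the itc->eng Marian checkpoint with transformers.
# The model id below is an assumption based on short_pair "itc-en".
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-itc-en"  # assumed id (short_pair: itc-en)
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Single target language (eng), so no ">>id<<" target token is needed here.
batch = tokenizer(["Il gatto dorme sul divano."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```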
[ "### itc-eng\n\n\n* source group: Italic languages\n* target group: English\n* OPUS readme: itc-eng\n* model: transformer\n* source language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.5, chr-F: 0.628\ntestset: URL, BLEU: 30.9, chr-F: 0.561\ntestset: URL, BLEU: 35.5, chr-F: 0.590\ntestset: URL, BLEU: 29.2, chr-F: 0.560\ntestset: URL, BLEU: 32.2, chr-F: 0.583\ntestset: URL, BLEU: 29.3, chr-F: 0.563\ntestset: URL, BLEU: 25.2, chr-F: 0.531\ntestset: URL, BLEU: 26.3, chr-F: 0.539\ntestset: URL, BLEU: 28.5, chr-F: 0.555\ntestset: URL, BLEU: 31.6, chr-F: 0.578\ntestset: URL, BLEU: 28.7, chr-F: 0.558\ntestset: URL, BLEU: 29.7, chr-F: 0.571\ntestset: URL, BLEU: 32.8, chr-F: 0.593\ntestset: URL, BLEU: 30.9, chr-F: 0.580\ntestset: URL, BLEU: 31.8, chr-F: 0.582\ntestset: URL, BLEU: 31.1, chr-F: 0.576\ntestset: URL, BLEU: 35.0, chr-F: 0.604\ntestset: URL, BLEU: 31.7, chr-F: 0.573\ntestset: URL, BLEU: 32.4, chr-F: 0.589\ntestset: URL, BLEU: 34.0, chr-F: 0.606\ntestset: URL, BLEU: 34.8, chr-F: 0.608\ntestset: URL, BLEU: 41.5, chr-F: 0.528\ntestset: URL, BLEU: 36.0, chr-F: 0.519\ntestset: URL, BLEU: 53.7, chr-F: 0.696\ntestset: URL, BLEU: 56.5, chr-F: 0.640\ntestset: URL, BLEU: 4.6, chr-F: 0.217\ntestset: URL, BLEU: 39.1, chr-F: 0.547\ntestset: URL, BLEU: 53.4, chr-F: 0.688\ntestset: URL, BLEU: 22.3, chr-F: 0.409\ntestset: URL, BLEU: 18.7, chr-F: 0.308\ntestset: URL, BLEU: 54.8, chr-F: 0.701\ntestset: URL, BLEU: 42.6, chr-F: 0.583\ntestset: URL, BLEU: 64.8, chr-F: 0.767\ntestset: URL, BLEU: 14.4, chr-F: 0.433\ntestset: URL, BLEU: 19.5, chr-F: 0.390\ntestset: URL, BLEU: 8.9, chr-F: 0.280\ntestset: URL, BLEU: 17.4, chr-F: 0.331\ntestset: URL, BLEU: 10.8, chr-F: 0.306\ntestset: URL, BLEU: 66.0, chr-F: 0.820\ntestset: URL, BLEU: 40.8, chr-F: 0.590\ntestset: URL, BLEU: 47.6, chr-F: 0.634\ntestset: URL, BLEU: 41.3, chr-F: 0.707\ntestset: URL, BLEU: 20.3, chr-F: 0.401\ntestset: URL, BLEU: 53.9, chr-F: 0.642\ntestset: URL, BLEU: 12.2, chr-F: 0.334\ntestset: URL, BLEU: 59.3, chr-F: 0.734\ntestset: URL, BLEU: 17.7, chr-F: 0.420\ntestset: URL, BLEU: 54.5, chr-F: 0.697\ntestset: URL, BLEU: 40.0, chr-F: 0.443\ntestset: URL, BLEU: 55.9, chr-F: 0.712\ntestset: URL, BLEU: 11.2, chr-F: 0.304\ntestset: URL, BLEU: 20.9, chr-F: 0.360", "### System Info:\n\n\n* hf\\_name: itc-eng\n* source\\_languages: itc\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc', 'en']\n* src\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* 
src\\_alpha3: itc\n* tgt\\_alpha3: eng\n* short\\_pair: itc-en\n* chrF2\\_score: 0.634\n* bleu: 47.6\n* brevity\\_penalty: 0.981\n* ref\\_len: 77633.0\n* src\\_name: Italic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: itc\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: itc-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #ca #rm #es #ro #gl #sc #co #wa #pt #oc #an #id #fr #ht #itc #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### itc-eng\n\n\n* source group: Italic languages\n* target group: English\n* OPUS readme: itc-eng\n* model: transformer\n* source language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.5, chr-F: 0.628\ntestset: URL, BLEU: 30.9, chr-F: 0.561\ntestset: URL, BLEU: 35.5, chr-F: 0.590\ntestset: URL, BLEU: 29.2, chr-F: 0.560\ntestset: URL, BLEU: 32.2, chr-F: 0.583\ntestset: URL, BLEU: 29.3, chr-F: 0.563\ntestset: URL, BLEU: 25.2, chr-F: 0.531\ntestset: URL, BLEU: 26.3, chr-F: 0.539\ntestset: URL, BLEU: 28.5, chr-F: 0.555\ntestset: URL, BLEU: 31.6, chr-F: 0.578\ntestset: URL, BLEU: 28.7, chr-F: 0.558\ntestset: URL, BLEU: 29.7, chr-F: 0.571\ntestset: URL, BLEU: 32.8, chr-F: 0.593\ntestset: URL, BLEU: 30.9, chr-F: 0.580\ntestset: URL, BLEU: 31.8, chr-F: 0.582\ntestset: URL, BLEU: 31.1, chr-F: 0.576\ntestset: URL, BLEU: 35.0, chr-F: 0.604\ntestset: URL, BLEU: 31.7, chr-F: 0.573\ntestset: URL, BLEU: 32.4, chr-F: 0.589\ntestset: URL, BLEU: 34.0, chr-F: 0.606\ntestset: URL, BLEU: 34.8, chr-F: 0.608\ntestset: URL, BLEU: 41.5, chr-F: 0.528\ntestset: URL, BLEU: 36.0, chr-F: 0.519\ntestset: URL, BLEU: 53.7, chr-F: 0.696\ntestset: URL, BLEU: 56.5, chr-F: 0.640\ntestset: URL, BLEU: 4.6, chr-F: 0.217\ntestset: URL, BLEU: 39.1, chr-F: 0.547\ntestset: URL, BLEU: 53.4, chr-F: 0.688\ntestset: URL, BLEU: 22.3, chr-F: 0.409\ntestset: URL, BLEU: 18.7, chr-F: 0.308\ntestset: URL, BLEU: 54.8, chr-F: 0.701\ntestset: URL, BLEU: 42.6, chr-F: 0.583\ntestset: URL, BLEU: 64.8, chr-F: 0.767\ntestset: URL, BLEU: 14.4, chr-F: 0.433\ntestset: URL, BLEU: 19.5, chr-F: 0.390\ntestset: URL, BLEU: 8.9, chr-F: 0.280\ntestset: URL, BLEU: 17.4, chr-F: 0.331\ntestset: URL, BLEU: 10.8, chr-F: 0.306\ntestset: URL, BLEU: 66.0, chr-F: 0.820\ntestset: URL, BLEU: 40.8, chr-F: 0.590\ntestset: URL, BLEU: 47.6, chr-F: 0.634\ntestset: URL, BLEU: 41.3, chr-F: 0.707\ntestset: URL, BLEU: 20.3, chr-F: 0.401\ntestset: URL, BLEU: 53.9, chr-F: 0.642\ntestset: URL, BLEU: 12.2, chr-F: 0.334\ntestset: URL, BLEU: 59.3, chr-F: 0.734\ntestset: URL, BLEU: 17.7, chr-F: 0.420\ntestset: URL, BLEU: 54.5, chr-F: 0.697\ntestset: URL, BLEU: 40.0, chr-F: 0.443\ntestset: URL, BLEU: 55.9, chr-F: 0.712\ntestset: URL, BLEU: 11.2, chr-F: 0.304\ntestset: URL, BLEU: 20.9, chr-F: 0.360", "### System Info:\n\n\n* hf\\_name: itc-eng\n* source\\_languages: itc\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc', 'en']\n* src\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 
'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: itc\n* tgt\\_alpha3: eng\n* short\\_pair: itc-en\n* chrF2\\_score: 0.634\n* bleu: 47.6\n* brevity\\_penalty: 0.981\n* ref\\_len: 77633.0\n* src\\_name: Italic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: itc\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: itc-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 85, 1394, 701 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #ca #rm #es #ro #gl #sc #co #wa #pt #oc #an #id #fr #ht #itc #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### itc-eng\n\n\n* source group: Italic languages\n* target group: English\n* OPUS readme: itc-eng\n* model: transformer\n* source language(s): arg ast cat cos egl ext fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Latn lij lld\\_Latn lmo max\\_Latn mfe min mwl oci pap pms por roh ron scn spa tmw\\_Latn vec wln zlm\\_Latn zsm\\_Latn\n* target language(s): eng\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 36.5, chr-F: 0.628\ntestset: URL, BLEU: 30.9, chr-F: 0.561\ntestset: URL, BLEU: 35.5, chr-F: 0.590\ntestset: URL, BLEU: 29.2, chr-F: 0.560\ntestset: URL, BLEU: 32.2, chr-F: 0.583\ntestset: URL, BLEU: 29.3, chr-F: 0.563\ntestset: URL, BLEU: 25.2, chr-F: 0.531\ntestset: URL, BLEU: 26.3, chr-F: 0.539\ntestset: URL, BLEU: 28.5, chr-F: 0.555\ntestset: URL, BLEU: 31.6, chr-F: 0.578\ntestset: URL, BLEU: 28.7, chr-F: 0.558\ntestset: URL, BLEU: 29.7, chr-F: 0.571\ntestset: URL, BLEU: 32.8, chr-F: 0.593\ntestset: URL, BLEU: 30.9, chr-F: 0.580\ntestset: URL, BLEU: 31.8, chr-F: 0.582\ntestset: URL, BLEU: 31.1, chr-F: 0.576\ntestset: URL, BLEU: 35.0, chr-F: 0.604\ntestset: URL, BLEU: 31.7, chr-F: 0.573\ntestset: URL, BLEU: 32.4, chr-F: 0.589\ntestset: URL, BLEU: 34.0, chr-F: 0.606\ntestset: URL, BLEU: 34.8, chr-F: 0.608\ntestset: URL, BLEU: 41.5, chr-F: 0.528\ntestset: URL, BLEU: 36.0, chr-F: 0.519\ntestset: URL, BLEU: 53.7, chr-F: 0.696\ntestset: URL, BLEU: 56.5, chr-F: 0.640\ntestset: URL, BLEU: 4.6, chr-F: 0.217\ntestset: URL, BLEU: 39.1, chr-F: 0.547\ntestset: URL, BLEU: 53.4, chr-F: 0.688\ntestset: URL, BLEU: 22.3, chr-F: 0.409\ntestset: URL, BLEU: 18.7, chr-F: 0.308\ntestset: URL, BLEU: 54.8, chr-F: 0.701\ntestset: URL, BLEU: 42.6, chr-F: 0.583\ntestset: URL, BLEU: 64.8, chr-F: 0.767\ntestset: URL, BLEU: 14.4, chr-F: 0.433\ntestset: URL, BLEU: 19.5, chr-F: 0.390\ntestset: URL, BLEU: 8.9, chr-F: 0.280\ntestset: URL, BLEU: 17.4, chr-F: 0.331\ntestset: URL, BLEU: 10.8, chr-F: 0.306\ntestset: URL, BLEU: 66.0, chr-F: 0.820\ntestset: URL, BLEU: 40.8, chr-F: 0.590\ntestset: URL, BLEU: 47.6, chr-F: 0.634\ntestset: URL, BLEU: 41.3, chr-F: 0.707\ntestset: URL, BLEU: 20.3, chr-F: 0.401\ntestset: URL, BLEU: 53.9, chr-F: 0.642\ntestset: URL, BLEU: 12.2, chr-F: 0.334\ntestset: URL, BLEU: 59.3, chr-F: 0.734\ntestset: URL, BLEU: 17.7, chr-F: 0.420\ntestset: URL, BLEU: 54.5, chr-F: 0.697\ntestset: URL, BLEU: 40.0, chr-F: 0.443\ntestset: URL, BLEU: 55.9, chr-F: 0.712\ntestset: URL, BLEU: 11.2, chr-F: 0.304\ntestset: URL, BLEU: 20.9, chr-F: 0.360### System Info:\n\n\n* hf\\_name: itc-eng\n* source\\_languages: itc\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc', 'en']\n* src\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 
'frm\\_Latn', 'scn', 'mfe'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: True\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: itc\n* tgt\\_alpha3: eng\n* short\\_pair: itc-en\n* chrF2\\_score: 0.634\n* bleu: 47.6\n* brevity\\_penalty: 0.981\n* ref\\_len: 77633.0\n* src\\_name: Italic languages\n* tgt\\_name: English\n* train\\_date: 2020-08-01\n* src\\_alpha2: itc\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: itc-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### itc-itc * source group: Italic languages * target group: Italic languages * OPUS readme: [itc-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-itc/README.md) * model: transformer * source language(s): arg ast bjn cat cos egl fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Grek lat_Latn lij lld_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm_Latn * target language(s): arg ast bjn cat cos egl fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Grek lat_Latn lij lld_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.zip) * test set translations: [opus-2020-07-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.test.txt) * test set scores: [opus-2020-07-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.arg-fra.arg.fra | 40.8 | 0.501 | | Tatoeba-test.arg-spa.arg.spa | 59.9 | 0.739 | | Tatoeba-test.ast-fra.ast.fra | 45.4 | 0.628 | | Tatoeba-test.ast-por.ast.por | 100.0 | 1.000 | | Tatoeba-test.ast-spa.ast.spa | 46.8 | 0.636 | | Tatoeba-test.cat-fra.cat.fra | 51.6 | 0.689 | | Tatoeba-test.cat-ita.cat.ita | 49.2 | 0.699 | | Tatoeba-test.cat-por.cat.por | 48.0 | 0.688 | | Tatoeba-test.cat-ron.cat.ron | 35.4 | 0.719 | | Tatoeba-test.cat-spa.cat.spa | 69.0 | 0.826 | | Tatoeba-test.cos-fra.cos.fra | 22.3 | 0.383 | | Tatoeba-test.cos-pms.cos.pms | 3.4 | 0.199 | | Tatoeba-test.egl-fra.egl.fra | 9.5 | 0.283 | | Tatoeba-test.egl-ita.egl.ita | 3.0 | 0.206 | | Tatoeba-test.egl-spa.egl.spa | 3.7 | 0.194 | | Tatoeba-test.fra-arg.fra.arg | 3.8 | 0.090 | | Tatoeba-test.fra-ast.fra.ast | 25.9 | 0.457 | | Tatoeba-test.fra-cat.fra.cat | 42.2 | 0.637 | | Tatoeba-test.fra-cos.fra.cos | 3.3 | 0.185 | | Tatoeba-test.fra-egl.fra.egl | 2.2 | 0.120 | | Tatoeba-test.fra-frm.fra.frm | 1.0 | 0.191 | | Tatoeba-test.fra-gcf.fra.gcf | 0.2 | 0.099 | | Tatoeba-test.fra-glg.fra.glg | 40.5 | 0.625 | | Tatoeba-test.fra-hat.fra.hat | 22.6 | 0.472 | | Tatoeba-test.fra-ita.fra.ita | 46.7 | 0.679 | | Tatoeba-test.fra-lad.fra.lad | 15.9 | 0.345 | | Tatoeba-test.fra-lat.fra.lat | 2.9 | 0.247 | | Tatoeba-test.fra-lij.fra.lij | 1.0 | 0.201 | | Tatoeba-test.fra-lld.fra.lld | 1.1 | 0.257 | | Tatoeba-test.fra-lmo.fra.lmo | 1.2 | 0.241 | | Tatoeba-test.fra-msa.fra.msa | 0.4 | 0.111 | | Tatoeba-test.fra-oci.fra.oci | 7.3 | 0.322 | | Tatoeba-test.fra-pap.fra.pap | 69.8 | 0.912 | | Tatoeba-test.fra-pcd.fra.pcd | 0.6 | 0.144 | | Tatoeba-test.fra-pms.fra.pms | 1.0 | 0.181 | | Tatoeba-test.fra-por.fra.por | 39.7 | 0.619 | | Tatoeba-test.fra-roh.fra.roh | 5.7 | 0.286 | | Tatoeba-test.fra-ron.fra.ron | 36.4 | 0.591 | | Tatoeba-test.fra-scn.fra.scn | 2.1 | 0.101 | | Tatoeba-test.fra-spa.fra.spa | 47.5 | 0.670 | | Tatoeba-test.fra-srd.fra.srd | 2.8 | 0.306 | | Tatoeba-test.fra-vec.fra.vec | 3.0 | 0.345 | | Tatoeba-test.fra-wln.fra.wln | 3.5 | 0.212 | | Tatoeba-test.frm-fra.frm.fra | 11.4 | 0.472 | | Tatoeba-test.gcf-fra.gcf.fra | 7.1 | 0.267 | | Tatoeba-test.gcf-lad.gcf.lad | 0.0 | 0.170 | | Tatoeba-test.gcf-por.gcf.por | 0.0 | 0.230 | | Tatoeba-test.gcf-spa.gcf.spa | 13.4 | 0.314 | | 
Tatoeba-test.glg-fra.glg.fra | 54.7 | 0.702 | | Tatoeba-test.glg-ita.glg.ita | 40.1 | 0.661 | | Tatoeba-test.glg-por.glg.por | 57.6 | 0.748 | | Tatoeba-test.glg-spa.glg.spa | 70.0 | 0.817 | | Tatoeba-test.hat-fra.hat.fra | 14.2 | 0.419 | | Tatoeba-test.hat-spa.hat.spa | 17.9 | 0.449 | | Tatoeba-test.ita-cat.ita.cat | 51.0 | 0.693 | | Tatoeba-test.ita-egl.ita.egl | 1.1 | 0.114 | | Tatoeba-test.ita-fra.ita.fra | 58.2 | 0.727 | | Tatoeba-test.ita-glg.ita.glg | 41.7 | 0.652 | | Tatoeba-test.ita-lad.ita.lad | 17.5 | 0.419 | | Tatoeba-test.ita-lat.ita.lat | 7.1 | 0.294 | | Tatoeba-test.ita-lij.ita.lij | 1.0 | 0.208 | | Tatoeba-test.ita-msa.ita.msa | 0.9 | 0.115 | | Tatoeba-test.ita-oci.ita.oci | 12.3 | 0.378 | | Tatoeba-test.ita-pms.ita.pms | 1.6 | 0.182 | | Tatoeba-test.ita-por.ita.por | 44.8 | 0.665 | | Tatoeba-test.ita-ron.ita.ron | 43.3 | 0.653 | | Tatoeba-test.ita-spa.ita.spa | 56.6 | 0.733 | | Tatoeba-test.ita-vec.ita.vec | 2.0 | 0.187 | | Tatoeba-test.lad-fra.lad.fra | 30.4 | 0.458 | | Tatoeba-test.lad-gcf.lad.gcf | 0.0 | 0.163 | | Tatoeba-test.lad-ita.lad.ita | 12.3 | 0.426 | | Tatoeba-test.lad-lat.lad.lat | 1.6 | 0.178 | | Tatoeba-test.lad-por.lad.por | 8.8 | 0.394 | | Tatoeba-test.lad-ron.lad.ron | 78.3 | 0.717 | | Tatoeba-test.lad-spa.lad.spa | 28.3 | 0.531 | | Tatoeba-test.lat-fra.lat.fra | 9.4 | 0.300 | | Tatoeba-test.lat-ita.lat.ita | 20.0 | 0.421 | | Tatoeba-test.lat-lad.lat.lad | 3.8 | 0.173 | | Tatoeba-test.lat-por.lat.por | 13.0 | 0.354 | | Tatoeba-test.lat-ron.lat.ron | 14.0 | 0.358 | | Tatoeba-test.lat-spa.lat.spa | 21.8 | 0.436 | | Tatoeba-test.lij-fra.lij.fra | 13.8 | 0.346 | | Tatoeba-test.lij-ita.lij.ita | 14.7 | 0.442 | | Tatoeba-test.lld-fra.lld.fra | 18.8 | 0.428 | | Tatoeba-test.lld-spa.lld.spa | 11.1 | 0.377 | | Tatoeba-test.lmo-fra.lmo.fra | 11.0 | 0.329 | | Tatoeba-test.msa-fra.msa.fra | 0.8 | 0.129 | | Tatoeba-test.msa-ita.msa.ita | 1.1 | 0.138 | | Tatoeba-test.msa-msa.msa.msa | 19.1 | 0.453 | | Tatoeba-test.msa-pap.msa.pap | 0.0 | 0.037 | | Tatoeba-test.msa-por.msa.por | 2.4 | 0.155 | | Tatoeba-test.msa-ron.msa.ron | 1.2 | 0.129 | | Tatoeba-test.msa-spa.msa.spa | 1.0 | 0.139 | | Tatoeba-test.multi.multi | 40.8 | 0.599 | | Tatoeba-test.mwl-por.mwl.por | 35.4 | 0.561 | | Tatoeba-test.oci-fra.oci.fra | 24.5 | 0.467 | | Tatoeba-test.oci-ita.oci.ita | 23.3 | 0.493 | | Tatoeba-test.oci-spa.oci.spa | 26.1 | 0.505 | | Tatoeba-test.pap-fra.pap.fra | 31.0 | 0.629 | | Tatoeba-test.pap-msa.pap.msa | 0.0 | 0.051 | | Tatoeba-test.pcd-fra.pcd.fra | 13.8 | 0.381 | | Tatoeba-test.pcd-spa.pcd.spa | 2.6 | 0.227 | | Tatoeba-test.pms-cos.pms.cos | 3.4 | 0.217 | | Tatoeba-test.pms-fra.pms.fra | 13.4 | 0.347 | | Tatoeba-test.pms-ita.pms.ita | 13.0 | 0.373 | | Tatoeba-test.pms-spa.pms.spa | 13.1 | 0.374 | | Tatoeba-test.por-ast.por.ast | 100.0 | 1.000 | | Tatoeba-test.por-cat.por.cat | 45.1 | 0.673 | | Tatoeba-test.por-fra.por.fra | 52.5 | 0.698 | | Tatoeba-test.por-gcf.por.gcf | 16.0 | 0.128 | | Tatoeba-test.por-glg.por.glg | 57.5 | 0.750 | | Tatoeba-test.por-ita.por.ita | 50.1 | 0.710 | | Tatoeba-test.por-lad.por.lad | 15.7 | 0.341 | | Tatoeba-test.por-lat.por.lat | 11.1 | 0.362 | | Tatoeba-test.por-msa.por.msa | 2.4 | 0.136 | | Tatoeba-test.por-mwl.por.mwl | 30.5 | 0.559 | | Tatoeba-test.por-roh.por.roh | 0.0 | 0.132 | | Tatoeba-test.por-ron.por.ron | 40.0 | 0.632 | | Tatoeba-test.por-spa.por.spa | 58.6 | 0.756 | | Tatoeba-test.roh-fra.roh.fra | 23.1 | 0.564 | | Tatoeba-test.roh-por.roh.por | 21.4 | 0.347 | | Tatoeba-test.roh-spa.roh.spa | 19.8 | 0.489 | | 
Tatoeba-test.ron-cat.ron.cat | 59.5 | 0.854 | | Tatoeba-test.ron-fra.ron.fra | 47.4 | 0.647 | | Tatoeba-test.ron-ita.ron.ita | 45.7 | 0.683 | | Tatoeba-test.ron-lad.ron.lad | 44.2 | 0.712 | | Tatoeba-test.ron-lat.ron.lat | 14.8 | 0.449 | | Tatoeba-test.ron-msa.ron.msa | 1.2 | 0.098 | | Tatoeba-test.ron-por.ron.por | 42.7 | 0.650 | | Tatoeba-test.ron-spa.ron.spa | 50.4 | 0.686 | | Tatoeba-test.scn-fra.scn.fra | 2.4 | 0.180 | | Tatoeba-test.scn-spa.scn.spa | 5.1 | 0.212 | | Tatoeba-test.spa-arg.spa.arg | 10.8 | 0.267 | | Tatoeba-test.spa-ast.spa.ast | 24.6 | 0.514 | | Tatoeba-test.spa-cat.spa.cat | 61.6 | 0.783 | | Tatoeba-test.spa-egl.spa.egl | 2.2 | 0.106 | | Tatoeba-test.spa-fra.spa.fra | 51.1 | 0.683 | | Tatoeba-test.spa-gcf.spa.gcf | 7.8 | 0.067 | | Tatoeba-test.spa-glg.spa.glg | 62.8 | 0.776 | | Tatoeba-test.spa-hat.spa.hat | 16.6 | 0.398 | | Tatoeba-test.spa-ita.spa.ita | 51.8 | 0.718 | | Tatoeba-test.spa-lad.spa.lad | 14.6 | 0.393 | | Tatoeba-test.spa-lat.spa.lat | 21.5 | 0.486 | | Tatoeba-test.spa-lld.spa.lld | 2.0 | 0.222 | | Tatoeba-test.spa-msa.spa.msa | 0.8 | 0.113 | | Tatoeba-test.spa-oci.spa.oci | 10.3 | 0.377 | | Tatoeba-test.spa-pcd.spa.pcd | 0.9 | 0.115 | | Tatoeba-test.spa-pms.spa.pms | 1.5 | 0.194 | | Tatoeba-test.spa-por.spa.por | 49.4 | 0.698 | | Tatoeba-test.spa-roh.spa.roh | 4.6 | 0.261 | | Tatoeba-test.spa-ron.spa.ron | 39.1 | 0.618 | | Tatoeba-test.spa-scn.spa.scn | 2.0 | 0.113 | | Tatoeba-test.spa-wln.spa.wln | 8.7 | 0.295 | | Tatoeba-test.srd-fra.srd.fra | 6.7 | 0.369 | | Tatoeba-test.vec-fra.vec.fra | 59.9 | 0.608 | | Tatoeba-test.vec-ita.vec.ita | 14.2 | 0.405 | | Tatoeba-test.wln-fra.wln.fra | 8.9 | 0.344 | | Tatoeba-test.wln-spa.wln.spa | 9.6 | 0.298 | ### System Info: - hf_name: itc-itc - source_languages: itc - target_languages: itc - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-itc/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc'] - src_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'} - tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'} - src_multilingual: True - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.test.txt - src_alpha3: itc - tgt_alpha3: itc - short_pair: itc-itc - chrF2_score: 0.599 - bleu: 40.8 - brevity_penalty: 0.968 - ref_len: 77448.0 - src_name: Italic languages - tgt_name: Italic languages - train_date: 2020-07-07 - src_alpha2: itc - tgt_alpha2: itc - prefer_old: False - long_pair: itc-itc - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
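A minimal usage sketch for the multilingual itc-itc checkpoint above, not part of the original card. The model id is taken from this record; the choice of ">>spa<<" as target token and the example sentence are illustrative assumptions.

```python
# Sketch: Italic-to-Italic translation with the required target-language token.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-itc-itc"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The card requires a sentence-initial ">>id<<" token naming the target
# language; "spa" (Spanish) is used here as an example target.
src = [">>spa<< Il gatto dorme sul divano."]
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```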
{"language": ["it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-itc-itc
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "it", "ca", "rm", "es", "ro", "gl", "sc", "co", "wa", "pt", "oc", "an", "id", "fr", "ht", "itc" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #it #ca #rm #es #ro #gl #sc #co #wa #pt #oc #an #id #fr #ht #itc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### itc-itc * source group: Italic languages * target group: Italic languages * OPUS readme: itc-itc * model: transformer * source language(s): arg ast bjn cat cos egl fra frm\_Latn gcf\_Latn glg hat ind ita lad lad\_Latn lat\_Grek lat\_Latn lij lld\_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm\_Latn * target language(s): arg ast bjn cat cos egl fra frm\_Latn gcf\_Latn glg hat ind ita lad lad\_Latn lat\_Grek lat\_Latn lij lld\_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm\_Latn * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 40.8, chr-F: 0.501 testset: URL, BLEU: 59.9, chr-F: 0.739 testset: URL, BLEU: 45.4, chr-F: 0.628 testset: URL, BLEU: 100.0, chr-F: 1.000 testset: URL, BLEU: 46.8, chr-F: 0.636 testset: URL, BLEU: 51.6, chr-F: 0.689 testset: URL, BLEU: 49.2, chr-F: 0.699 testset: URL, BLEU: 48.0, chr-F: 0.688 testset: URL, BLEU: 35.4, chr-F: 0.719 testset: URL, BLEU: 69.0, chr-F: 0.826 testset: URL, BLEU: 22.3, chr-F: 0.383 testset: URL, BLEU: 3.4, chr-F: 0.199 testset: URL, BLEU: 9.5, chr-F: 0.283 testset: URL, BLEU: 3.0, chr-F: 0.206 testset: URL, BLEU: 3.7, chr-F: 0.194 testset: URL, BLEU: 3.8, chr-F: 0.090 testset: URL, BLEU: 25.9, chr-F: 0.457 testset: URL, BLEU: 42.2, chr-F: 0.637 testset: URL, BLEU: 3.3, chr-F: 0.185 testset: URL, BLEU: 2.2, chr-F: 0.120 testset: URL, BLEU: 1.0, chr-F: 0.191 testset: URL, BLEU: 0.2, chr-F: 0.099 testset: URL, BLEU: 40.5, chr-F: 0.625 testset: URL, BLEU: 22.6, chr-F: 0.472 testset: URL, BLEU: 46.7, chr-F: 0.679 testset: URL, BLEU: 15.9, chr-F: 0.345 testset: URL, BLEU: 2.9, chr-F: 0.247 testset: URL, BLEU: 1.0, chr-F: 0.201 testset: URL, BLEU: 1.1, chr-F: 0.257 testset: URL, BLEU: 1.2, chr-F: 0.241 testset: URL, BLEU: 0.4, chr-F: 0.111 testset: URL, BLEU: 7.3, chr-F: 0.322 testset: URL, BLEU: 69.8, chr-F: 0.912 testset: URL, BLEU: 0.6, chr-F: 0.144 testset: URL, BLEU: 1.0, chr-F: 0.181 testset: URL, BLEU: 39.7, chr-F: 0.619 testset: URL, BLEU: 5.7, chr-F: 0.286 testset: URL, BLEU: 36.4, chr-F: 0.591 testset: URL, BLEU: 2.1, chr-F: 0.101 testset: URL, BLEU: 47.5, chr-F: 0.670 testset: URL, BLEU: 2.8, chr-F: 0.306 testset: URL, BLEU: 3.0, chr-F: 0.345 testset: URL, BLEU: 3.5, chr-F: 0.212 testset: URL, BLEU: 11.4, chr-F: 0.472 testset: URL, BLEU: 7.1, chr-F: 0.267 testset: URL, BLEU: 0.0, chr-F: 0.170 testset: URL, BLEU: 0.0, chr-F: 0.230 testset: URL, BLEU: 13.4, chr-F: 0.314 testset: URL, BLEU: 54.7, chr-F: 0.702 testset: URL, BLEU: 40.1, chr-F: 0.661 testset: URL, BLEU: 57.6, chr-F: 0.748 testset: URL, BLEU: 70.0, chr-F: 0.817 testset: URL, BLEU: 14.2, chr-F: 0.419 testset: URL, BLEU: 17.9, chr-F: 0.449 testset: URL, BLEU: 51.0, chr-F: 0.693 testset: URL, BLEU: 1.1, chr-F: 0.114 testset: URL, BLEU: 58.2, chr-F: 0.727 testset: URL, BLEU: 41.7, chr-F: 0.652 testset: URL, BLEU: 17.5, chr-F: 0.419 testset: URL, BLEU: 7.1, chr-F: 0.294 testset: URL, BLEU: 1.0, chr-F: 0.208 testset: URL, BLEU: 0.9, chr-F: 0.115 testset: URL, BLEU: 12.3, chr-F: 0.378 testset: URL, BLEU: 1.6, chr-F: 0.182 testset: URL, BLEU: 44.8, chr-F: 0.665 testset: URL, BLEU: 43.3, chr-F: 0.653 testset: URL, BLEU: 56.6, chr-F: 0.733 testset: URL, BLEU: 2.0, chr-F: 0.187 testset: URL, BLEU: 30.4, chr-F: 0.458 testset: URL, BLEU: 0.0, chr-F: 0.163 testset: URL, BLEU: 12.3, chr-F: 
0.426 testset: URL, BLEU: 1.6, chr-F: 0.178 testset: URL, BLEU: 8.8, chr-F: 0.394 testset: URL, BLEU: 78.3, chr-F: 0.717 testset: URL, BLEU: 28.3, chr-F: 0.531 testset: URL, BLEU: 9.4, chr-F: 0.300 testset: URL, BLEU: 20.0, chr-F: 0.421 testset: URL, BLEU: 3.8, chr-F: 0.173 testset: URL, BLEU: 13.0, chr-F: 0.354 testset: URL, BLEU: 14.0, chr-F: 0.358 testset: URL, BLEU: 21.8, chr-F: 0.436 testset: URL, BLEU: 13.8, chr-F: 0.346 testset: URL, BLEU: 14.7, chr-F: 0.442 testset: URL, BLEU: 18.8, chr-F: 0.428 testset: URL, BLEU: 11.1, chr-F: 0.377 testset: URL, BLEU: 11.0, chr-F: 0.329 testset: URL, BLEU: 0.8, chr-F: 0.129 testset: URL, BLEU: 1.1, chr-F: 0.138 testset: URL, BLEU: 19.1, chr-F: 0.453 testset: URL, BLEU: 0.0, chr-F: 0.037 testset: URL, BLEU: 2.4, chr-F: 0.155 testset: URL, BLEU: 1.2, chr-F: 0.129 testset: URL, BLEU: 1.0, chr-F: 0.139 testset: URL, BLEU: 40.8, chr-F: 0.599 testset: URL, BLEU: 35.4, chr-F: 0.561 testset: URL, BLEU: 24.5, chr-F: 0.467 testset: URL, BLEU: 23.3, chr-F: 0.493 testset: URL, BLEU: 26.1, chr-F: 0.505 testset: URL, BLEU: 31.0, chr-F: 0.629 testset: URL, BLEU: 0.0, chr-F: 0.051 testset: URL, BLEU: 13.8, chr-F: 0.381 testset: URL, BLEU: 2.6, chr-F: 0.227 testset: URL, BLEU: 3.4, chr-F: 0.217 testset: URL, BLEU: 13.4, chr-F: 0.347 testset: URL, BLEU: 13.0, chr-F: 0.373 testset: URL, BLEU: 13.1, chr-F: 0.374 testset: URL, BLEU: 100.0, chr-F: 1.000 testset: URL, BLEU: 45.1, chr-F: 0.673 testset: URL, BLEU: 52.5, chr-F: 0.698 testset: URL, BLEU: 16.0, chr-F: 0.128 testset: URL, BLEU: 57.5, chr-F: 0.750 testset: URL, BLEU: 50.1, chr-F: 0.710 testset: URL, BLEU: 15.7, chr-F: 0.341 testset: URL, BLEU: 11.1, chr-F: 0.362 testset: URL, BLEU: 2.4, chr-F: 0.136 testset: URL, BLEU: 30.5, chr-F: 0.559 testset: URL, BLEU: 0.0, chr-F: 0.132 testset: URL, BLEU: 40.0, chr-F: 0.632 testset: URL, BLEU: 58.6, chr-F: 0.756 testset: URL, BLEU: 23.1, chr-F: 0.564 testset: URL, BLEU: 21.4, chr-F: 0.347 testset: URL, BLEU: 19.8, chr-F: 0.489 testset: URL, BLEU: 59.5, chr-F: 0.854 testset: URL, BLEU: 47.4, chr-F: 0.647 testset: URL, BLEU: 45.7, chr-F: 0.683 testset: URL, BLEU: 44.2, chr-F: 0.712 testset: URL, BLEU: 14.8, chr-F: 0.449 testset: URL, BLEU: 1.2, chr-F: 0.098 testset: URL, BLEU: 42.7, chr-F: 0.650 testset: URL, BLEU: 50.4, chr-F: 0.686 testset: URL, BLEU: 2.4, chr-F: 0.180 testset: URL, BLEU: 5.1, chr-F: 0.212 testset: URL, BLEU: 10.8, chr-F: 0.267 testset: URL, BLEU: 24.6, chr-F: 0.514 testset: URL, BLEU: 61.6, chr-F: 0.783 testset: URL, BLEU: 2.2, chr-F: 0.106 testset: URL, BLEU: 51.1, chr-F: 0.683 testset: URL, BLEU: 7.8, chr-F: 0.067 testset: URL, BLEU: 62.8, chr-F: 0.776 testset: URL, BLEU: 16.6, chr-F: 0.398 testset: URL, BLEU: 51.8, chr-F: 0.718 testset: URL, BLEU: 14.6, chr-F: 0.393 testset: URL, BLEU: 21.5, chr-F: 0.486 testset: URL, BLEU: 2.0, chr-F: 0.222 testset: URL, BLEU: 0.8, chr-F: 0.113 testset: URL, BLEU: 10.3, chr-F: 0.377 testset: URL, BLEU: 0.9, chr-F: 0.115 testset: URL, BLEU: 1.5, chr-F: 0.194 testset: URL, BLEU: 49.4, chr-F: 0.698 testset: URL, BLEU: 4.6, chr-F: 0.261 testset: URL, BLEU: 39.1, chr-F: 0.618 testset: URL, BLEU: 2.0, chr-F: 0.113 testset: URL, BLEU: 8.7, chr-F: 0.295 testset: URL, BLEU: 6.7, chr-F: 0.369 testset: URL, BLEU: 59.9, chr-F: 0.608 testset: URL, BLEU: 14.2, chr-F: 0.405 testset: URL, BLEU: 8.9, chr-F: 0.344 testset: URL, BLEU: 9.6, chr-F: 0.298 ### System Info: * hf\_name: itc-itc * source\_languages: itc * target\_languages: itc * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: 
['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc'] * src\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\_Latn', 'lad\_Latn', 'pcd', 'lat\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\_Latn', 'srd', 'gcf\_Latn', 'lld\_Latn', 'min', 'tmw\_Latn', 'cos', 'wln', 'zlm\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\_Latn', 'frm\_Latn', 'scn', 'mfe'} * tgt\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\_Latn', 'lad\_Latn', 'pcd', 'lat\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\_Latn', 'srd', 'gcf\_Latn', 'lld\_Latn', 'min', 'tmw\_Latn', 'cos', 'wln', 'zlm\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\_Latn', 'frm\_Latn', 'scn', 'mfe'} * src\_multilingual: True * tgt\_multilingual: True * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: itc * tgt\_alpha3: itc * short\_pair: itc-itc * chrF2\_score: 0.599 * bleu: 40.8 * brevity\_penalty: 0.968 * ref\_len: 77448.0 * src\_name: Italic languages * tgt\_name: Italic languages * train\_date: 2020-07-07 * src\_alpha2: itc * tgt\_alpha2: itc * prefer\_old: False * long\_pair: itc-itc * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### itc-itc\n\n\n* source group: Italic languages\n* target group: Italic languages\n* OPUS readme: itc-itc\n* model: transformer\n* source language(s): arg ast bjn cat cos egl fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Grek lat\\_Latn lij lld\\_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm\\_Latn\n* target language(s): arg ast bjn cat cos egl fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Grek lat\\_Latn lij lld\\_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.8, chr-F: 0.501\ntestset: URL, BLEU: 59.9, chr-F: 0.739\ntestset: URL, BLEU: 45.4, chr-F: 0.628\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 46.8, chr-F: 0.636\ntestset: URL, BLEU: 51.6, chr-F: 0.689\ntestset: URL, BLEU: 49.2, chr-F: 0.699\ntestset: URL, BLEU: 48.0, chr-F: 0.688\ntestset: URL, BLEU: 35.4, chr-F: 0.719\ntestset: URL, BLEU: 69.0, chr-F: 0.826\ntestset: URL, BLEU: 22.3, chr-F: 0.383\ntestset: URL, BLEU: 3.4, chr-F: 0.199\ntestset: URL, BLEU: 9.5, chr-F: 0.283\ntestset: URL, BLEU: 3.0, chr-F: 0.206\ntestset: URL, BLEU: 3.7, chr-F: 0.194\ntestset: URL, BLEU: 3.8, chr-F: 0.090\ntestset: URL, BLEU: 25.9, chr-F: 0.457\ntestset: URL, BLEU: 42.2, chr-F: 0.637\ntestset: URL, BLEU: 3.3, chr-F: 0.185\ntestset: URL, BLEU: 2.2, chr-F: 0.120\ntestset: URL, BLEU: 1.0, chr-F: 0.191\ntestset: URL, BLEU: 0.2, chr-F: 0.099\ntestset: URL, BLEU: 40.5, chr-F: 0.625\ntestset: URL, BLEU: 22.6, chr-F: 0.472\ntestset: URL, BLEU: 46.7, chr-F: 0.679\ntestset: URL, BLEU: 15.9, chr-F: 0.345\ntestset: URL, BLEU: 2.9, chr-F: 0.247\ntestset: URL, BLEU: 1.0, chr-F: 0.201\ntestset: URL, BLEU: 1.1, chr-F: 0.257\ntestset: URL, BLEU: 1.2, chr-F: 0.241\ntestset: URL, BLEU: 0.4, chr-F: 0.111\ntestset: URL, BLEU: 7.3, chr-F: 0.322\ntestset: URL, BLEU: 69.8, chr-F: 0.912\ntestset: URL, BLEU: 0.6, chr-F: 0.144\ntestset: URL, BLEU: 1.0, chr-F: 0.181\ntestset: URL, BLEU: 39.7, chr-F: 0.619\ntestset: URL, BLEU: 5.7, chr-F: 0.286\ntestset: URL, BLEU: 36.4, chr-F: 0.591\ntestset: URL, BLEU: 2.1, chr-F: 0.101\ntestset: URL, BLEU: 47.5, chr-F: 0.670\ntestset: URL, BLEU: 2.8, chr-F: 0.306\ntestset: URL, BLEU: 3.0, chr-F: 0.345\ntestset: URL, BLEU: 3.5, chr-F: 0.212\ntestset: URL, BLEU: 11.4, chr-F: 0.472\ntestset: URL, BLEU: 7.1, chr-F: 0.267\ntestset: URL, BLEU: 0.0, chr-F: 0.170\ntestset: URL, BLEU: 0.0, chr-F: 0.230\ntestset: URL, BLEU: 13.4, chr-F: 0.314\ntestset: URL, BLEU: 54.7, chr-F: 0.702\ntestset: URL, BLEU: 40.1, chr-F: 0.661\ntestset: URL, BLEU: 57.6, chr-F: 0.748\ntestset: URL, BLEU: 70.0, chr-F: 0.817\ntestset: URL, BLEU: 14.2, chr-F: 0.419\ntestset: URL, BLEU: 17.9, chr-F: 0.449\ntestset: URL, BLEU: 51.0, chr-F: 0.693\ntestset: URL, BLEU: 1.1, chr-F: 0.114\ntestset: URL, BLEU: 58.2, chr-F: 0.727\ntestset: URL, BLEU: 41.7, chr-F: 0.652\ntestset: URL, BLEU: 17.5, chr-F: 0.419\ntestset: URL, BLEU: 7.1, chr-F: 0.294\ntestset: URL, BLEU: 1.0, chr-F: 0.208\ntestset: URL, BLEU: 0.9, chr-F: 0.115\ntestset: URL, BLEU: 12.3, chr-F: 0.378\ntestset: URL, BLEU: 1.6, chr-F: 0.182\ntestset: URL, BLEU: 44.8, chr-F: 0.665\ntestset: URL, BLEU: 43.3, chr-F: 0.653\ntestset: URL, BLEU: 56.6, chr-F: 0.733\ntestset: URL, BLEU: 2.0, chr-F: 
0.187\ntestset: URL, BLEU: 30.4, chr-F: 0.458\ntestset: URL, BLEU: 0.0, chr-F: 0.163\ntestset: URL, BLEU: 12.3, chr-F: 0.426\ntestset: URL, BLEU: 1.6, chr-F: 0.178\ntestset: URL, BLEU: 8.8, chr-F: 0.394\ntestset: URL, BLEU: 78.3, chr-F: 0.717\ntestset: URL, BLEU: 28.3, chr-F: 0.531\ntestset: URL, BLEU: 9.4, chr-F: 0.300\ntestset: URL, BLEU: 20.0, chr-F: 0.421\ntestset: URL, BLEU: 3.8, chr-F: 0.173\ntestset: URL, BLEU: 13.0, chr-F: 0.354\ntestset: URL, BLEU: 14.0, chr-F: 0.358\ntestset: URL, BLEU: 21.8, chr-F: 0.436\ntestset: URL, BLEU: 13.8, chr-F: 0.346\ntestset: URL, BLEU: 14.7, chr-F: 0.442\ntestset: URL, BLEU: 18.8, chr-F: 0.428\ntestset: URL, BLEU: 11.1, chr-F: 0.377\ntestset: URL, BLEU: 11.0, chr-F: 0.329\ntestset: URL, BLEU: 0.8, chr-F: 0.129\ntestset: URL, BLEU: 1.1, chr-F: 0.138\ntestset: URL, BLEU: 19.1, chr-F: 0.453\ntestset: URL, BLEU: 0.0, chr-F: 0.037\ntestset: URL, BLEU: 2.4, chr-F: 0.155\ntestset: URL, BLEU: 1.2, chr-F: 0.129\ntestset: URL, BLEU: 1.0, chr-F: 0.139\ntestset: URL, BLEU: 40.8, chr-F: 0.599\ntestset: URL, BLEU: 35.4, chr-F: 0.561\ntestset: URL, BLEU: 24.5, chr-F: 0.467\ntestset: URL, BLEU: 23.3, chr-F: 0.493\ntestset: URL, BLEU: 26.1, chr-F: 0.505\ntestset: URL, BLEU: 31.0, chr-F: 0.629\ntestset: URL, BLEU: 0.0, chr-F: 0.051\ntestset: URL, BLEU: 13.8, chr-F: 0.381\ntestset: URL, BLEU: 2.6, chr-F: 0.227\ntestset: URL, BLEU: 3.4, chr-F: 0.217\ntestset: URL, BLEU: 13.4, chr-F: 0.347\ntestset: URL, BLEU: 13.0, chr-F: 0.373\ntestset: URL, BLEU: 13.1, chr-F: 0.374\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 45.1, chr-F: 0.673\ntestset: URL, BLEU: 52.5, chr-F: 0.698\ntestset: URL, BLEU: 16.0, chr-F: 0.128\ntestset: URL, BLEU: 57.5, chr-F: 0.750\ntestset: URL, BLEU: 50.1, chr-F: 0.710\ntestset: URL, BLEU: 15.7, chr-F: 0.341\ntestset: URL, BLEU: 11.1, chr-F: 0.362\ntestset: URL, BLEU: 2.4, chr-F: 0.136\ntestset: URL, BLEU: 30.5, chr-F: 0.559\ntestset: URL, BLEU: 0.0, chr-F: 0.132\ntestset: URL, BLEU: 40.0, chr-F: 0.632\ntestset: URL, BLEU: 58.6, chr-F: 0.756\ntestset: URL, BLEU: 23.1, chr-F: 0.564\ntestset: URL, BLEU: 21.4, chr-F: 0.347\ntestset: URL, BLEU: 19.8, chr-F: 0.489\ntestset: URL, BLEU: 59.5, chr-F: 0.854\ntestset: URL, BLEU: 47.4, chr-F: 0.647\ntestset: URL, BLEU: 45.7, chr-F: 0.683\ntestset: URL, BLEU: 44.2, chr-F: 0.712\ntestset: URL, BLEU: 14.8, chr-F: 0.449\ntestset: URL, BLEU: 1.2, chr-F: 0.098\ntestset: URL, BLEU: 42.7, chr-F: 0.650\ntestset: URL, BLEU: 50.4, chr-F: 0.686\ntestset: URL, BLEU: 2.4, chr-F: 0.180\ntestset: URL, BLEU: 5.1, chr-F: 0.212\ntestset: URL, BLEU: 10.8, chr-F: 0.267\ntestset: URL, BLEU: 24.6, chr-F: 0.514\ntestset: URL, BLEU: 61.6, chr-F: 0.783\ntestset: URL, BLEU: 2.2, chr-F: 0.106\ntestset: URL, BLEU: 51.1, chr-F: 0.683\ntestset: URL, BLEU: 7.8, chr-F: 0.067\ntestset: URL, BLEU: 62.8, chr-F: 0.776\ntestset: URL, BLEU: 16.6, chr-F: 0.398\ntestset: URL, BLEU: 51.8, chr-F: 0.718\ntestset: URL, BLEU: 14.6, chr-F: 0.393\ntestset: URL, BLEU: 21.5, chr-F: 0.486\ntestset: URL, BLEU: 2.0, chr-F: 0.222\ntestset: URL, BLEU: 0.8, chr-F: 0.113\ntestset: URL, BLEU: 10.3, chr-F: 0.377\ntestset: URL, BLEU: 0.9, chr-F: 0.115\ntestset: URL, BLEU: 1.5, chr-F: 0.194\ntestset: URL, BLEU: 49.4, chr-F: 0.698\ntestset: URL, BLEU: 4.6, chr-F: 0.261\ntestset: URL, BLEU: 39.1, chr-F: 0.618\ntestset: URL, BLEU: 2.0, chr-F: 0.113\ntestset: URL, BLEU: 8.7, chr-F: 0.295\ntestset: URL, BLEU: 6.7, chr-F: 0.369\ntestset: URL, BLEU: 59.9, chr-F: 0.608\ntestset: URL, BLEU: 14.2, chr-F: 0.405\ntestset: URL, BLEU: 8.9, chr-F: 0.344\ntestset: URL, 
BLEU: 9.6, chr-F: 0.298", "### System Info:\n\n\n* hf\\_name: itc-itc\n* source\\_languages: itc\n* target\\_languages: itc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']\n* src\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* tgt\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: itc\n* tgt\\_alpha3: itc\n* short\\_pair: itc-itc\n* chrF2\\_score: 0.599\n* bleu: 40.8\n* brevity\\_penalty: 0.968\n* ref\\_len: 77448.0\n* src\\_name: Italic languages\n* tgt\\_name: Italic languages\n* train\\_date: 2020-07-07\n* src\\_alpha2: itc\n* tgt\\_alpha2: itc\n* prefer\\_old: False\n* long\\_pair: itc-itc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #ca #rm #es #ro #gl #sc #co #wa #pt #oc #an #id #fr #ht #itc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### itc-itc\n\n\n* source group: Italic languages\n* target group: Italic languages\n* OPUS readme: itc-itc\n* model: transformer\n* source language(s): arg ast bjn cat cos egl fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Grek lat\\_Latn lij lld\\_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm\\_Latn\n* target language(s): arg ast bjn cat cos egl fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Grek lat\\_Latn lij lld\\_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.8, chr-F: 0.501\ntestset: URL, BLEU: 59.9, chr-F: 0.739\ntestset: URL, BLEU: 45.4, chr-F: 0.628\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 46.8, chr-F: 0.636\ntestset: URL, BLEU: 51.6, chr-F: 0.689\ntestset: URL, BLEU: 49.2, chr-F: 0.699\ntestset: URL, BLEU: 48.0, chr-F: 0.688\ntestset: URL, BLEU: 35.4, chr-F: 0.719\ntestset: URL, BLEU: 69.0, chr-F: 0.826\ntestset: URL, BLEU: 22.3, chr-F: 0.383\ntestset: URL, BLEU: 3.4, chr-F: 0.199\ntestset: URL, BLEU: 9.5, chr-F: 0.283\ntestset: URL, BLEU: 3.0, chr-F: 0.206\ntestset: URL, BLEU: 3.7, chr-F: 0.194\ntestset: URL, BLEU: 3.8, chr-F: 0.090\ntestset: URL, BLEU: 25.9, chr-F: 0.457\ntestset: URL, BLEU: 42.2, chr-F: 0.637\ntestset: URL, BLEU: 3.3, chr-F: 0.185\ntestset: URL, BLEU: 2.2, chr-F: 0.120\ntestset: URL, BLEU: 1.0, chr-F: 0.191\ntestset: URL, BLEU: 0.2, chr-F: 0.099\ntestset: URL, BLEU: 40.5, chr-F: 0.625\ntestset: URL, BLEU: 22.6, chr-F: 0.472\ntestset: URL, BLEU: 46.7, chr-F: 0.679\ntestset: URL, BLEU: 15.9, chr-F: 0.345\ntestset: URL, BLEU: 2.9, chr-F: 0.247\ntestset: URL, BLEU: 1.0, chr-F: 0.201\ntestset: URL, BLEU: 1.1, chr-F: 0.257\ntestset: URL, BLEU: 1.2, chr-F: 0.241\ntestset: URL, BLEU: 0.4, chr-F: 0.111\ntestset: URL, BLEU: 7.3, chr-F: 0.322\ntestset: URL, BLEU: 69.8, chr-F: 0.912\ntestset: URL, BLEU: 0.6, chr-F: 0.144\ntestset: URL, BLEU: 1.0, chr-F: 0.181\ntestset: URL, BLEU: 39.7, chr-F: 0.619\ntestset: URL, BLEU: 5.7, chr-F: 0.286\ntestset: URL, BLEU: 36.4, chr-F: 0.591\ntestset: URL, BLEU: 2.1, chr-F: 0.101\ntestset: URL, BLEU: 47.5, chr-F: 0.670\ntestset: URL, BLEU: 2.8, chr-F: 0.306\ntestset: URL, BLEU: 3.0, chr-F: 0.345\ntestset: URL, BLEU: 3.5, chr-F: 0.212\ntestset: URL, BLEU: 11.4, chr-F: 0.472\ntestset: URL, BLEU: 7.1, chr-F: 0.267\ntestset: URL, BLEU: 0.0, chr-F: 0.170\ntestset: URL, BLEU: 0.0, chr-F: 0.230\ntestset: URL, BLEU: 13.4, chr-F: 0.314\ntestset: URL, BLEU: 54.7, chr-F: 0.702\ntestset: URL, BLEU: 40.1, chr-F: 0.661\ntestset: URL, BLEU: 57.6, chr-F: 0.748\ntestset: URL, BLEU: 70.0, chr-F: 0.817\ntestset: URL, BLEU: 14.2, chr-F: 0.419\ntestset: URL, BLEU: 17.9, chr-F: 0.449\ntestset: URL, BLEU: 51.0, chr-F: 0.693\ntestset: URL, BLEU: 1.1, chr-F: 0.114\ntestset: URL, BLEU: 58.2, chr-F: 0.727\ntestset: URL, BLEU: 41.7, chr-F: 0.652\ntestset: URL, BLEU: 17.5, chr-F: 0.419\ntestset: URL, BLEU: 7.1, chr-F: 0.294\ntestset: URL, BLEU: 1.0, chr-F: 0.208\ntestset: URL, BLEU: 0.9, chr-F: 
0.115\ntestset: URL, BLEU: 12.3, chr-F: 0.378\ntestset: URL, BLEU: 1.6, chr-F: 0.182\ntestset: URL, BLEU: 44.8, chr-F: 0.665\ntestset: URL, BLEU: 43.3, chr-F: 0.653\ntestset: URL, BLEU: 56.6, chr-F: 0.733\ntestset: URL, BLEU: 2.0, chr-F: 0.187\ntestset: URL, BLEU: 30.4, chr-F: 0.458\ntestset: URL, BLEU: 0.0, chr-F: 0.163\ntestset: URL, BLEU: 12.3, chr-F: 0.426\ntestset: URL, BLEU: 1.6, chr-F: 0.178\ntestset: URL, BLEU: 8.8, chr-F: 0.394\ntestset: URL, BLEU: 78.3, chr-F: 0.717\ntestset: URL, BLEU: 28.3, chr-F: 0.531\ntestset: URL, BLEU: 9.4, chr-F: 0.300\ntestset: URL, BLEU: 20.0, chr-F: 0.421\ntestset: URL, BLEU: 3.8, chr-F: 0.173\ntestset: URL, BLEU: 13.0, chr-F: 0.354\ntestset: URL, BLEU: 14.0, chr-F: 0.358\ntestset: URL, BLEU: 21.8, chr-F: 0.436\ntestset: URL, BLEU: 13.8, chr-F: 0.346\ntestset: URL, BLEU: 14.7, chr-F: 0.442\ntestset: URL, BLEU: 18.8, chr-F: 0.428\ntestset: URL, BLEU: 11.1, chr-F: 0.377\ntestset: URL, BLEU: 11.0, chr-F: 0.329\ntestset: URL, BLEU: 0.8, chr-F: 0.129\ntestset: URL, BLEU: 1.1, chr-F: 0.138\ntestset: URL, BLEU: 19.1, chr-F: 0.453\ntestset: URL, BLEU: 0.0, chr-F: 0.037\ntestset: URL, BLEU: 2.4, chr-F: 0.155\ntestset: URL, BLEU: 1.2, chr-F: 0.129\ntestset: URL, BLEU: 1.0, chr-F: 0.139\ntestset: URL, BLEU: 40.8, chr-F: 0.599\ntestset: URL, BLEU: 35.4, chr-F: 0.561\ntestset: URL, BLEU: 24.5, chr-F: 0.467\ntestset: URL, BLEU: 23.3, chr-F: 0.493\ntestset: URL, BLEU: 26.1, chr-F: 0.505\ntestset: URL, BLEU: 31.0, chr-F: 0.629\ntestset: URL, BLEU: 0.0, chr-F: 0.051\ntestset: URL, BLEU: 13.8, chr-F: 0.381\ntestset: URL, BLEU: 2.6, chr-F: 0.227\ntestset: URL, BLEU: 3.4, chr-F: 0.217\ntestset: URL, BLEU: 13.4, chr-F: 0.347\ntestset: URL, BLEU: 13.0, chr-F: 0.373\ntestset: URL, BLEU: 13.1, chr-F: 0.374\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 45.1, chr-F: 0.673\ntestset: URL, BLEU: 52.5, chr-F: 0.698\ntestset: URL, BLEU: 16.0, chr-F: 0.128\ntestset: URL, BLEU: 57.5, chr-F: 0.750\ntestset: URL, BLEU: 50.1, chr-F: 0.710\ntestset: URL, BLEU: 15.7, chr-F: 0.341\ntestset: URL, BLEU: 11.1, chr-F: 0.362\ntestset: URL, BLEU: 2.4, chr-F: 0.136\ntestset: URL, BLEU: 30.5, chr-F: 0.559\ntestset: URL, BLEU: 0.0, chr-F: 0.132\ntestset: URL, BLEU: 40.0, chr-F: 0.632\ntestset: URL, BLEU: 58.6, chr-F: 0.756\ntestset: URL, BLEU: 23.1, chr-F: 0.564\ntestset: URL, BLEU: 21.4, chr-F: 0.347\ntestset: URL, BLEU: 19.8, chr-F: 0.489\ntestset: URL, BLEU: 59.5, chr-F: 0.854\ntestset: URL, BLEU: 47.4, chr-F: 0.647\ntestset: URL, BLEU: 45.7, chr-F: 0.683\ntestset: URL, BLEU: 44.2, chr-F: 0.712\ntestset: URL, BLEU: 14.8, chr-F: 0.449\ntestset: URL, BLEU: 1.2, chr-F: 0.098\ntestset: URL, BLEU: 42.7, chr-F: 0.650\ntestset: URL, BLEU: 50.4, chr-F: 0.686\ntestset: URL, BLEU: 2.4, chr-F: 0.180\ntestset: URL, BLEU: 5.1, chr-F: 0.212\ntestset: URL, BLEU: 10.8, chr-F: 0.267\ntestset: URL, BLEU: 24.6, chr-F: 0.514\ntestset: URL, BLEU: 61.6, chr-F: 0.783\ntestset: URL, BLEU: 2.2, chr-F: 0.106\ntestset: URL, BLEU: 51.1, chr-F: 0.683\ntestset: URL, BLEU: 7.8, chr-F: 0.067\ntestset: URL, BLEU: 62.8, chr-F: 0.776\ntestset: URL, BLEU: 16.6, chr-F: 0.398\ntestset: URL, BLEU: 51.8, chr-F: 0.718\ntestset: URL, BLEU: 14.6, chr-F: 0.393\ntestset: URL, BLEU: 21.5, chr-F: 0.486\ntestset: URL, BLEU: 2.0, chr-F: 0.222\ntestset: URL, BLEU: 0.8, chr-F: 0.113\ntestset: URL, BLEU: 10.3, chr-F: 0.377\ntestset: URL, BLEU: 0.9, chr-F: 0.115\ntestset: URL, BLEU: 1.5, chr-F: 0.194\ntestset: URL, BLEU: 49.4, chr-F: 0.698\ntestset: URL, BLEU: 4.6, chr-F: 0.261\ntestset: URL, BLEU: 39.1, chr-F: 0.618\ntestset: URL, 
BLEU: 2.0, chr-F: 0.113\ntestset: URL, BLEU: 8.7, chr-F: 0.295\ntestset: URL, BLEU: 6.7, chr-F: 0.369\ntestset: URL, BLEU: 59.9, chr-F: 0.608\ntestset: URL, BLEU: 14.2, chr-F: 0.405\ntestset: URL, BLEU: 8.9, chr-F: 0.344\ntestset: URL, BLEU: 9.6, chr-F: 0.298", "### System Info:\n\n\n* hf\\_name: itc-itc\n* source\\_languages: itc\n* target\\_languages: itc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']\n* src\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* tgt\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: itc\n* tgt\\_alpha3: itc\n* short\\_pair: itc-itc\n* chrF2\\_score: 0.599\n* bleu: 40.8\n* brevity\\_penalty: 0.968\n* ref\\_len: 77448.0\n* src\\_name: Italic languages\n* tgt\\_name: Italic languages\n* train\\_date: 2020-07-07\n* src\\_alpha2: itc\n* tgt\\_alpha2: itc\n* prefer\\_old: False\n* long\\_pair: itc-itc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 83, 3888, 942 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #it #ca #rm #es #ro #gl #sc #co #wa #pt #oc #an #id #fr #ht #itc #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### itc-itc\n\n\n* source group: Italic languages\n* target group: Italic languages\n* OPUS readme: itc-itc\n* model: transformer\n* source language(s): arg ast bjn cat cos egl fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Grek lat\\_Latn lij lld\\_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm\\_Latn\n* target language(s): arg ast bjn cat cos egl fra frm\\_Latn gcf\\_Latn glg hat ind ita lad lad\\_Latn lat\\_Grek lat\\_Latn lij lld\\_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm\\_Latn\n* model: transformer\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 40.8, chr-F: 0.501\ntestset: URL, BLEU: 59.9, chr-F: 0.739\ntestset: URL, BLEU: 45.4, chr-F: 0.628\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 46.8, chr-F: 0.636\ntestset: URL, BLEU: 51.6, chr-F: 0.689\ntestset: URL, BLEU: 49.2, chr-F: 0.699\ntestset: URL, BLEU: 48.0, chr-F: 0.688\ntestset: URL, BLEU: 35.4, chr-F: 0.719\ntestset: URL, BLEU: 69.0, chr-F: 0.826\ntestset: URL, BLEU: 22.3, chr-F: 0.383\ntestset: URL, BLEU: 3.4, chr-F: 0.199\ntestset: URL, BLEU: 9.5, chr-F: 0.283\ntestset: URL, BLEU: 3.0, chr-F: 0.206\ntestset: URL, BLEU: 3.7, chr-F: 0.194\ntestset: URL, BLEU: 3.8, chr-F: 0.090\ntestset: URL, BLEU: 25.9, chr-F: 0.457\ntestset: URL, BLEU: 42.2, chr-F: 0.637\ntestset: URL, BLEU: 3.3, chr-F: 0.185\ntestset: URL, BLEU: 2.2, chr-F: 0.120\ntestset: URL, BLEU: 1.0, chr-F: 0.191\ntestset: URL, BLEU: 0.2, chr-F: 0.099\ntestset: URL, BLEU: 40.5, chr-F: 0.625\ntestset: URL, BLEU: 22.6, chr-F: 0.472\ntestset: URL, BLEU: 46.7, chr-F: 0.679\ntestset: URL, BLEU: 15.9, chr-F: 0.345\ntestset: URL, BLEU: 2.9, chr-F: 0.247\ntestset: URL, BLEU: 1.0, chr-F: 0.201\ntestset: URL, BLEU: 1.1, chr-F: 0.257\ntestset: URL, BLEU: 1.2, chr-F: 0.241\ntestset: URL, BLEU: 0.4, chr-F: 0.111\ntestset: URL, BLEU: 7.3, chr-F: 0.322\ntestset: URL, BLEU: 69.8, chr-F: 0.912\ntestset: URL, BLEU: 0.6, chr-F: 0.144\ntestset: URL, BLEU: 1.0, chr-F: 0.181\ntestset: URL, BLEU: 39.7, chr-F: 0.619\ntestset: URL, BLEU: 5.7, chr-F: 0.286\ntestset: URL, BLEU: 36.4, chr-F: 0.591\ntestset: URL, BLEU: 2.1, chr-F: 0.101\ntestset: URL, BLEU: 47.5, chr-F: 0.670\ntestset: URL, BLEU: 2.8, chr-F: 0.306\ntestset: URL, BLEU: 3.0, chr-F: 0.345\ntestset: URL, BLEU: 3.5, chr-F: 0.212\ntestset: URL, BLEU: 11.4, chr-F: 0.472\ntestset: URL, BLEU: 7.1, chr-F: 0.267\ntestset: URL, BLEU: 0.0, chr-F: 0.170\ntestset: URL, BLEU: 0.0, chr-F: 0.230\ntestset: URL, BLEU: 13.4, chr-F: 0.314\ntestset: URL, BLEU: 54.7, chr-F: 0.702\ntestset: URL, BLEU: 40.1, chr-F: 0.661\ntestset: URL, BLEU: 57.6, chr-F: 0.748\ntestset: URL, BLEU: 70.0, chr-F: 0.817\ntestset: URL, BLEU: 14.2, chr-F: 0.419\ntestset: URL, BLEU: 17.9, chr-F: 0.449\ntestset: URL, BLEU: 51.0, chr-F: 0.693\ntestset: URL, BLEU: 1.1, chr-F: 0.114\ntestset: URL, BLEU: 58.2, chr-F: 0.727\ntestset: URL, BLEU: 41.7, chr-F: 0.652\ntestset: URL, BLEU: 17.5, chr-F: 0.419\ntestset: URL, BLEU: 7.1, chr-F: 0.294\ntestset: URL, BLEU: 1.0, chr-F: 0.208\ntestset: URL, BLEU: 0.9, chr-F: 0.115\ntestset: 
URL, BLEU: 12.3, chr-F: 0.378\ntestset: URL, BLEU: 1.6, chr-F: 0.182\ntestset: URL, BLEU: 44.8, chr-F: 0.665\ntestset: URL, BLEU: 43.3, chr-F: 0.653\ntestset: URL, BLEU: 56.6, chr-F: 0.733\ntestset: URL, BLEU: 2.0, chr-F: 0.187\ntestset: URL, BLEU: 30.4, chr-F: 0.458\ntestset: URL, BLEU: 0.0, chr-F: 0.163\ntestset: URL, BLEU: 12.3, chr-F: 0.426\ntestset: URL, BLEU: 1.6, chr-F: 0.178\ntestset: URL, BLEU: 8.8, chr-F: 0.394\ntestset: URL, BLEU: 78.3, chr-F: 0.717\ntestset: URL, BLEU: 28.3, chr-F: 0.531\ntestset: URL, BLEU: 9.4, chr-F: 0.300\ntestset: URL, BLEU: 20.0, chr-F: 0.421\ntestset: URL, BLEU: 3.8, chr-F: 0.173\ntestset: URL, BLEU: 13.0, chr-F: 0.354\ntestset: URL, BLEU: 14.0, chr-F: 0.358\ntestset: URL, BLEU: 21.8, chr-F: 0.436\ntestset: URL, BLEU: 13.8, chr-F: 0.346\ntestset: URL, BLEU: 14.7, chr-F: 0.442\ntestset: URL, BLEU: 18.8, chr-F: 0.428\ntestset: URL, BLEU: 11.1, chr-F: 0.377\ntestset: URL, BLEU: 11.0, chr-F: 0.329\ntestset: URL, BLEU: 0.8, chr-F: 0.129\ntestset: URL, BLEU: 1.1, chr-F: 0.138\ntestset: URL, BLEU: 19.1, chr-F: 0.453\ntestset: URL, BLEU: 0.0, chr-F: 0.037\ntestset: URL, BLEU: 2.4, chr-F: 0.155\ntestset: URL, BLEU: 1.2, chr-F: 0.129\ntestset: URL, BLEU: 1.0, chr-F: 0.139\ntestset: URL, BLEU: 40.8, chr-F: 0.599\ntestset: URL, BLEU: 35.4, chr-F: 0.561\ntestset: URL, BLEU: 24.5, chr-F: 0.467\ntestset: URL, BLEU: 23.3, chr-F: 0.493\ntestset: URL, BLEU: 26.1, chr-F: 0.505\ntestset: URL, BLEU: 31.0, chr-F: 0.629\ntestset: URL, BLEU: 0.0, chr-F: 0.051\ntestset: URL, BLEU: 13.8, chr-F: 0.381\ntestset: URL, BLEU: 2.6, chr-F: 0.227\ntestset: URL, BLEU: 3.4, chr-F: 0.217\ntestset: URL, BLEU: 13.4, chr-F: 0.347\ntestset: URL, BLEU: 13.0, chr-F: 0.373\ntestset: URL, BLEU: 13.1, chr-F: 0.374\ntestset: URL, BLEU: 100.0, chr-F: 1.000\ntestset: URL, BLEU: 45.1, chr-F: 0.673\ntestset: URL, BLEU: 52.5, chr-F: 0.698\ntestset: URL, BLEU: 16.0, chr-F: 0.128\ntestset: URL, BLEU: 57.5, chr-F: 0.750\ntestset: URL, BLEU: 50.1, chr-F: 0.710\ntestset: URL, BLEU: 15.7, chr-F: 0.341\ntestset: URL, BLEU: 11.1, chr-F: 0.362\ntestset: URL, BLEU: 2.4, chr-F: 0.136\ntestset: URL, BLEU: 30.5, chr-F: 0.559\ntestset: URL, BLEU: 0.0, chr-F: 0.132\ntestset: URL, BLEU: 40.0, chr-F: 0.632\ntestset: URL, BLEU: 58.6, chr-F: 0.756\ntestset: URL, BLEU: 23.1, chr-F: 0.564\ntestset: URL, BLEU: 21.4, chr-F: 0.347\ntestset: URL, BLEU: 19.8, chr-F: 0.489\ntestset: URL, BLEU: 59.5, chr-F: 0.854\ntestset: URL, BLEU: 47.4, chr-F: 0.647\ntestset: URL, BLEU: 45.7, chr-F: 0.683\ntestset: URL, BLEU: 44.2, chr-F: 0.712\ntestset: URL, BLEU: 14.8, chr-F: 0.449\ntestset: URL, BLEU: 1.2, chr-F: 0.098\ntestset: URL, BLEU: 42.7, chr-F: 0.650\ntestset: URL, BLEU: 50.4, chr-F: 0.686\ntestset: URL, BLEU: 2.4, chr-F: 0.180\ntestset: URL, BLEU: 5.1, chr-F: 0.212\ntestset: URL, BLEU: 10.8, chr-F: 0.267\ntestset: URL, BLEU: 24.6, chr-F: 0.514\ntestset: URL, BLEU: 61.6, chr-F: 0.783\ntestset: URL, BLEU: 2.2, chr-F: 0.106\ntestset: URL, BLEU: 51.1, chr-F: 0.683\ntestset: URL, BLEU: 7.8, chr-F: 0.067\ntestset: URL, BLEU: 62.8, chr-F: 0.776\ntestset: URL, BLEU: 16.6, chr-F: 0.398\ntestset: URL, BLEU: 51.8, chr-F: 0.718\ntestset: URL, BLEU: 14.6, chr-F: 0.393\ntestset: URL, BLEU: 21.5, chr-F: 0.486\ntestset: URL, BLEU: 2.0, chr-F: 0.222\ntestset: URL, BLEU: 0.8, chr-F: 0.113\ntestset: URL, BLEU: 10.3, chr-F: 0.377\ntestset: URL, BLEU: 0.9, chr-F: 0.115\ntestset: URL, BLEU: 1.5, chr-F: 0.194\ntestset: URL, BLEU: 49.4, chr-F: 0.698\ntestset: URL, BLEU: 4.6, chr-F: 0.261\ntestset: URL, BLEU: 39.1, chr-F: 0.618\ntestset: URL, BLEU: 2.0, 
chr-F: 0.113\ntestset: URL, BLEU: 8.7, chr-F: 0.295\ntestset: URL, BLEU: 6.7, chr-F: 0.369\ntestset: URL, BLEU: 59.9, chr-F: 0.608\ntestset: URL, BLEU: 14.2, chr-F: 0.405\ntestset: URL, BLEU: 8.9, chr-F: 0.344\ntestset: URL, BLEU: 9.6, chr-F: 0.298### System Info:\n\n\n* hf\\_name: itc-itc\n* source\\_languages: itc\n* target\\_languages: itc\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc']\n* src\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* tgt\\_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat\\_Latn', 'lad\\_Latn', 'pcd', 'lat\\_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm\\_Latn', 'srd', 'gcf\\_Latn', 'lld\\_Latn', 'min', 'tmw\\_Latn', 'cos', 'wln', 'zlm\\_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max\\_Latn', 'frm\\_Latn', 'scn', 'mfe'}\n* src\\_multilingual: True\n* tgt\\_multilingual: True\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: itc\n* tgt\\_alpha3: itc\n* short\\_pair: itc-itc\n* chrF2\\_score: 0.599\n* bleu: 40.8\n* brevity\\_penalty: 0.968\n* ref\\_len: 77448.0\n* src\\_name: Italic languages\n* tgt\\_name: Italic languages\n* train\\_date: 2020-07-07\n* src\\_alpha2: itc\n* tgt\\_alpha2: itc\n* prefer\\_old: False\n* long\\_pair: itc-itc\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-ara * source group: Japanese * target group: Arabic * OPUS readme: [jpn-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-ara/README.md) * model: transformer-align * source language(s): jpn_Hani jpn_Hira jpn_Kana * target language(s): acm apc ara arq arz * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ara/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ara/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ara/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.ara | 11.6 | 0.394 | ### System Info: - hf_name: jpn-ara - source_languages: jpn - target_languages: ara - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-ara/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'ar'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ara/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ara/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: ara - short_pair: ja-ar - chrF2_score: 0.39399999999999996 - bleu: 11.6 - brevity_penalty: 1.0 - ref_len: 7089.0 - src_name: Japanese - tgt_name: Arabic - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: ar - prefer_old: False - long_pair: jpn-ara - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "ar"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-ar
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "ar" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-ara * source group: Japanese * target group: Arabic * OPUS readme: jpn-ara * model: transformer-align * source language(s): jpn\_Hani jpn\_Hira jpn\_Kana * target language(s): acm apc ara arq arz * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 11.6, chr-F: 0.394 ### System Info: * hf\_name: jpn-ara * source\_languages: jpn * target\_languages: ara * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'ar'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'apc', 'ara', 'arq\_Latn', 'arq', 'afb', 'ara\_Latn', 'apc\_Latn', 'arz'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: ara * short\_pair: ja-ar * chrF2\_score: 0.39399999999999996 * bleu: 11.6 * brevity\_penalty: 1.0 * ref\_len: 7089.0 * src\_name: Japanese * tgt\_name: Arabic * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: ar * prefer\_old: False * long\_pair: jpn-ara * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-ara\n\n\n* source group: Japanese\n* target group: Arabic\n* OPUS readme: jpn-ara\n* model: transformer-align\n* source language(s): jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): acm apc ara arq arz\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.6, chr-F: 0.394", "### System Info:\n\n\n* hf\\_name: jpn-ara\n* source\\_languages: jpn\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'ar']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: ara\n* short\\_pair: ja-ar\n* chrF2\\_score: 0.39399999999999996\n* bleu: 11.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 7089.0\n* src\\_name: Japanese\n* tgt\\_name: Arabic\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: jpn-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-ara\n\n\n* source group: Japanese\n* target group: Arabic\n* OPUS readme: jpn-ara\n* model: transformer-align\n* source language(s): jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): acm apc ara arq arz\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.6, chr-F: 0.394", "### System Info:\n\n\n* hf\\_name: jpn-ara\n* source\\_languages: jpn\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'ar']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: ara\n* short\\_pair: ja-ar\n* chrF2\\_score: 0.39399999999999996\n* bleu: 11.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 7089.0\n* src\\_name: Japanese\n* tgt\\_name: Arabic\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: jpn-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 185, 520 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-ara\n\n\n* source group: Japanese\n* target group: Arabic\n* OPUS readme: jpn-ara\n* model: transformer-align\n* source language(s): jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): acm apc ara arq arz\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 11.6, chr-F: 0.394### System Info:\n\n\n* hf\\_name: jpn-ara\n* source\\_languages: jpn\n* target\\_languages: ara\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'ar']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'apc', 'ara', 'arq\\_Latn', 'arq', 'afb', 'ara\\_Latn', 'apc\\_Latn', 'arz'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: ara\n* short\\_pair: ja-ar\n* chrF2\\_score: 0.39399999999999996\n* bleu: 11.6\n* brevity\\_penalty: 1.0\n* ref\\_len: 7089.0\n* src\\_name: Japanese\n* tgt\\_name: Arabic\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: ar\n* prefer\\_old: False\n* long\\_pair: jpn-ara\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-bul * source group: Japanese * target group: Bulgarian * OPUS readme: [jpn-bul](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-bul/README.md) * model: transformer-align * source language(s): jpn jpn_Hani jpn_Hira jpn_Kana * target language(s): bul * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.bul | 20.2 | 0.422 | ### System Info: - hf_name: jpn-bul - source_languages: jpn - target_languages: bul - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-bul/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'bg'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'bul', 'bul_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-bul/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: bul - short_pair: ja-bg - chrF2_score: 0.42200000000000004 - bleu: 20.2 - brevity_penalty: 0.9570000000000001 - ref_len: 2346.0 - src_name: Japanese - tgt_name: Bulgarian - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: bg - prefer_old: False - long_pair: jpn-bul - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "bg"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-bg
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "bg", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "bg" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-bul * source group: Japanese * target group: Bulgarian * OPUS readme: jpn-bul * model: transformer-align * source language(s): jpn jpn\_Hani jpn\_Hira jpn\_Kana * target language(s): bul * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.2, chr-F: 0.422 ### System Info: * hf\_name: jpn-bul * source\_languages: jpn * target\_languages: bul * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'bg'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'bul', 'bul\_Latn'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: bul * short\_pair: ja-bg * chrF2\_score: 0.42200000000000004 * bleu: 20.2 * brevity\_penalty: 0.9570000000000001 * ref\_len: 2346.0 * src\_name: Japanese * tgt\_name: Bulgarian * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: bg * prefer\_old: False * long\_pair: jpn-bul * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-bul\n\n\n* source group: Japanese\n* target group: Bulgarian\n* OPUS readme: jpn-bul\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): bul\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.422", "### System Info:\n\n\n* hf\\_name: jpn-bul\n* source\\_languages: jpn\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'bg']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: bul\n* short\\_pair: ja-bg\n* chrF2\\_score: 0.42200000000000004\n* bleu: 20.2\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 2346.0\n* src\\_name: Japanese\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: jpn-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-bul\n\n\n* source group: Japanese\n* target group: Bulgarian\n* OPUS readme: jpn-bul\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): bul\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.422", "### System Info:\n\n\n* hf\\_name: jpn-bul\n* source\\_languages: jpn\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'bg']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: bul\n* short\\_pair: ja-bg\n* chrF2\\_score: 0.42200000000000004\n* bleu: 20.2\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 2346.0\n* src\\_name: Japanese\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: jpn-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 154, 490 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #bg #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-bul\n\n\n* source group: Japanese\n* target group: Bulgarian\n* OPUS readme: jpn-bul\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): bul\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.422### System Info:\n\n\n* hf\\_name: jpn-bul\n* source\\_languages: jpn\n* target\\_languages: bul\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'bg']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'bul', 'bul\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: bul\n* short\\_pair: ja-bg\n* chrF2\\_score: 0.42200000000000004\n* bleu: 20.2\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 2346.0\n* src\\_name: Japanese\n* tgt\\_name: Bulgarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: bg\n* prefer\\_old: False\n* long\\_pair: jpn-bul\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-dan * source group: Japanese * target group: Danish * OPUS readme: [jpn-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-dan/README.md) * model: transformer-align * source language(s): jpn_Hani jpn_Hira jpn_Kana jpn_Latn jpn_Yiii * target language(s): dan * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-dan/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-dan/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-dan/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.dan | 43.2 | 0.590 | ### System Info: - hf_name: jpn-dan - source_languages: jpn - target_languages: dan - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-dan/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'da'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'dan'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-dan/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-dan/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: dan - short_pair: ja-da - chrF2_score: 0.59 - bleu: 43.2 - brevity_penalty: 0.972 - ref_len: 5823.0 - src_name: Japanese - tgt_name: Danish - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: da - prefer_old: False - long_pair: jpn-dan - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "da"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-da
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "da", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "da" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-dan * source group: Japanese * target group: Danish * OPUS readme: jpn-dan * model: transformer-align * source language(s): jpn\_Hani jpn\_Hira jpn\_Kana jpn\_Latn jpn\_Yiii * target language(s): dan * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 43.2, chr-F: 0.590 ### System Info: * hf\_name: jpn-dan * source\_languages: jpn * target\_languages: dan * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'da'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'dan'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: dan * short\_pair: ja-da * chrF2\_score: 0.59 * bleu: 43.2 * brevity\_penalty: 0.972 * ref\_len: 5823.0 * src\_name: Japanese * tgt\_name: Danish * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: da * prefer\_old: False * long\_pair: jpn-dan * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-dan\n\n\n* source group: Japanese\n* target group: Danish\n* OPUS readme: jpn-dan\n* model: transformer-align\n* source language(s): jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.2, chr-F: 0.590", "### System Info:\n\n\n* hf\\_name: jpn-dan\n* source\\_languages: jpn\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'da']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: dan\n* short\\_pair: ja-da\n* chrF2\\_score: 0.59\n* bleu: 43.2\n* brevity\\_penalty: 0.972\n* ref\\_len: 5823.0\n* src\\_name: Japanese\n* tgt\\_name: Danish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: jpn-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-dan\n\n\n* source group: Japanese\n* target group: Danish\n* OPUS readme: jpn-dan\n* model: transformer-align\n* source language(s): jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.2, chr-F: 0.590", "### System Info:\n\n\n* hf\\_name: jpn-dan\n* source\\_languages: jpn\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'da']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: dan\n* short\\_pair: ja-da\n* chrF2\\_score: 0.59\n* bleu: 43.2\n* brevity\\_penalty: 0.972\n* ref\\_len: 5823.0\n* src\\_name: Japanese\n* tgt\\_name: Danish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: jpn-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 162, 458 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-dan\n\n\n* source group: Japanese\n* target group: Danish\n* OPUS readme: jpn-dan\n* model: transformer-align\n* source language(s): jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): dan\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 43.2, chr-F: 0.590### System Info:\n\n\n* hf\\_name: jpn-dan\n* source\\_languages: jpn\n* target\\_languages: dan\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'da']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'dan'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: dan\n* short\\_pair: ja-da\n* chrF2\\_score: 0.59\n* bleu: 43.2\n* brevity\\_penalty: 0.972\n* ref\\_len: 5823.0\n* src\\_name: Japanese\n* tgt\\_name: Danish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: da\n* prefer\\_old: False\n* long\\_pair: jpn-dan\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-ja-de * source languages: ja * target languages: de * OPUS readme: [ja-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-de/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-de/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-de/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ja.de | 30.1 | 0.518 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ja-de * source languages: ja * target languages: de * OPUS readme: ja-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.1, chr-F: 0.518
[ "### opus-mt-ja-de\n\n\n* source languages: ja\n* target languages: de\n* OPUS readme: ja-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.518" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ja-de\n\n\n* source languages: ja\n* target languages: de\n* OPUS readme: ja-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.518" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ja-de\n\n\n* source languages: ja\n* target languages: de\n* OPUS readme: ja-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.1, chr-F: 0.518" ]
translation
transformers
### opus-mt-ja-en * source languages: ja * target languages: en * OPUS readme: [ja-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ja.en | 41.7 | 0.589 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ja-en * source languages: ja * target languages: en * OPUS readme: ja-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 41.7, chr-F: 0.589
[ "### opus-mt-ja-en\n\n\n* source languages: ja\n* target languages: en\n* OPUS readme: ja-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.7, chr-F: 0.589" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ja-en\n\n\n* source languages: ja\n* target languages: en\n* OPUS readme: ja-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.7, chr-F: 0.589" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ja-en\n\n\n* source languages: ja\n* target languages: en\n* OPUS readme: ja-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.7, chr-F: 0.589" ]
translation
transformers
### opus-mt-ja-es * source languages: ja * target languages: es * OPUS readme: [ja-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ja.es | 34.6 | 0.553 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ja-es * source languages: ja * target languages: es * OPUS readme: ja-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 34.6, chr-F: 0.553
[ "### opus-mt-ja-es\n\n\n* source languages: ja\n* target languages: es\n* OPUS readme: ja-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.6, chr-F: 0.553" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ja-es\n\n\n* source languages: ja\n* target languages: es\n* OPUS readme: ja-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.6, chr-F: 0.553" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ja-es\n\n\n* source languages: ja\n* target languages: es\n* OPUS readme: ja-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.6, chr-F: 0.553" ]
translation
transformers
### opus-mt-ja-fi * source languages: ja * target languages: fi * OPUS readme: [ja-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-fi/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-fi/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-fi/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ja.fi | 21.2 | 0.448 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ja-fi * source languages: ja * target languages: fi * OPUS readme: ja-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 21.2, chr-F: 0.448
[ "### opus-mt-ja-fi\n\n\n* source languages: ja\n* target languages: fi\n* OPUS readme: ja-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.2, chr-F: 0.448" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ja-fi\n\n\n* source languages: ja\n* target languages: fi\n* OPUS readme: ja-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.2, chr-F: 0.448" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ja-fi\n\n\n* source languages: ja\n* target languages: fi\n* OPUS readme: ja-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.2, chr-F: 0.448" ]
translation
transformers
### opus-mt-ja-fr * source languages: ja * target languages: fr * OPUS readme: [ja-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ja.fr | 33.6 | 0.534 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ja-fr * source languages: ja * target languages: fr * OPUS readme: ja-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 33.6, chr-F: 0.534
[ "### opus-mt-ja-fr\n\n\n* source languages: ja\n* target languages: fr\n* OPUS readme: ja-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.6, chr-F: 0.534" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ja-fr\n\n\n* source languages: ja\n* target languages: fr\n* OPUS readme: ja-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.6, chr-F: 0.534" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ja-fr\n\n\n* source languages: ja\n* target languages: fr\n* OPUS readme: ja-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 33.6, chr-F: 0.534" ]
translation
transformers
### jpn-heb * source group: Japanese * target group: Hebrew * OPUS readme: [jpn-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-heb/README.md) * model: transformer-align * source language(s): jpn_Hani jpn_Hira jpn_Kana * target language(s): heb * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-heb/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-heb/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-heb/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.heb | 20.2 | 0.397 | ### System Info: - hf_name: jpn-heb - source_languages: jpn - target_languages: heb - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-heb/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'he'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'heb'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-heb/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-heb/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: heb - short_pair: ja-he - chrF2_score: 0.397 - bleu: 20.2 - brevity_penalty: 1.0 - ref_len: 1598.0 - src_name: Japanese - tgt_name: Hebrew - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: he - prefer_old: False - long_pair: jpn-heb - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "he"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-he
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "he", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "he" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-heb * source group: Japanese * target group: Hebrew * OPUS readme: jpn-heb * model: transformer-align * source language(s): jpn\_Hani jpn\_Hira jpn\_Kana * target language(s): heb * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.2, chr-F: 0.397 ### System Info: * hf\_name: jpn-heb * source\_languages: jpn * target\_languages: heb * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'he'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'heb'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: heb * short\_pair: ja-he * chrF2\_score: 0.397 * bleu: 20.2 * brevity\_penalty: 1.0 * ref\_len: 1598.0 * src\_name: Japanese * tgt\_name: Hebrew * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: he * prefer\_old: False * long\_pair: jpn-heb * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-heb\n\n\n* source group: Japanese\n* target group: Hebrew\n* OPUS readme: jpn-heb\n* model: transformer-align\n* source language(s): jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.397", "### System Info:\n\n\n* hf\\_name: jpn-heb\n* source\\_languages: jpn\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'he']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: heb\n* short\\_pair: ja-he\n* chrF2\\_score: 0.397\n* bleu: 20.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 1598.0\n* src\\_name: Japanese\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: jpn-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-heb\n\n\n* source group: Japanese\n* target group: Hebrew\n* OPUS readme: jpn-heb\n* model: transformer-align\n* source language(s): jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.397", "### System Info:\n\n\n* hf\\_name: jpn-heb\n* source\\_languages: jpn\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'he']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: heb\n* short\\_pair: ja-he\n* chrF2\\_score: 0.397\n* bleu: 20.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 1598.0\n* src\\_name: Japanese\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: jpn-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 153, 463 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #he #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-heb\n\n\n* source group: Japanese\n* target group: Hebrew\n* OPUS readme: jpn-heb\n* model: transformer-align\n* source language(s): jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): heb\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.397### System Info:\n\n\n* hf\\_name: jpn-heb\n* source\\_languages: jpn\n* target\\_languages: heb\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'he']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'heb'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: heb\n* short\\_pair: ja-he\n* chrF2\\_score: 0.397\n* bleu: 20.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 1598.0\n* src\\_name: Japanese\n* tgt\\_name: Hebrew\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: he\n* prefer\\_old: False\n* long\\_pair: jpn-heb\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-hun * source group: Japanese * target group: Hungarian * OPUS readme: [jpn-hun](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-hun/README.md) * model: transformer-align * source language(s): jpn_Bopo jpn_Hani jpn_Hira jpn_Kana jpn_Yiii * target language(s): hun * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hun/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hun/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hun/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.hun | 12.2 | 0.364 | ### System Info: - hf_name: jpn-hun - source_languages: jpn - target_languages: hun - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-hun/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'hu'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'hun'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hun/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hun/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: hun - short_pair: ja-hu - chrF2_score: 0.364 - bleu: 12.2 - brevity_penalty: 1.0 - ref_len: 18625.0 - src_name: Japanese - tgt_name: Hungarian - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: hu - prefer_old: False - long_pair: jpn-hun - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "hu"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-hu
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "hu" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-hun * source group: Japanese * target group: Hungarian * OPUS readme: jpn-hun * model: transformer-align * source language(s): jpn\_Bopo jpn\_Hani jpn\_Hira jpn\_Kana jpn\_Yiii * target language(s): hun * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 12.2, chr-F: 0.364 ### System Info: * hf\_name: jpn-hun * source\_languages: jpn * target\_languages: hun * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'hu'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'hun'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: hun * short\_pair: ja-hu * chrF2\_score: 0.364 * bleu: 12.2 * brevity\_penalty: 1.0 * ref\_len: 18625.0 * src\_name: Japanese * tgt\_name: Hungarian * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: hu * prefer\_old: False * long\_pair: jpn-hun * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-hun\n\n\n* source group: Japanese\n* target group: Hungarian\n* OPUS readme: jpn-hun\n* model: transformer-align\n* source language(s): jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Yiii\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.2, chr-F: 0.364", "### System Info:\n\n\n* hf\\_name: jpn-hun\n* source\\_languages: jpn\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'hu']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: hun\n* short\\_pair: ja-hu\n* chrF2\\_score: 0.364\n* bleu: 12.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 18625.0\n* src\\_name: Japanese\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: jpn-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-hun\n\n\n* source group: Japanese\n* target group: Hungarian\n* OPUS readme: jpn-hun\n* model: transformer-align\n* source language(s): jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Yiii\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.2, chr-F: 0.364", "### System Info:\n\n\n* hf\\_name: jpn-hun\n* source\\_languages: jpn\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'hu']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: hun\n* short\\_pair: ja-hu\n* chrF2\\_score: 0.364\n* bleu: 12.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 18625.0\n* src\\_name: Japanese\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: jpn-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 165, 463 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-hun\n\n\n* source group: Japanese\n* target group: Hungarian\n* OPUS readme: jpn-hun\n* model: transformer-align\n* source language(s): jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Yiii\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 12.2, chr-F: 0.364### System Info:\n\n\n* hf\\_name: jpn-hun\n* source\\_languages: jpn\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'hu']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: hun\n* short\\_pair: ja-hu\n* chrF2\\_score: 0.364\n* bleu: 12.2\n* brevity\\_penalty: 1.0\n* ref\\_len: 18625.0\n* src\\_name: Japanese\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: jpn-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-ita * source group: Japanese * target group: Italian * OPUS readme: [jpn-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-ita/README.md) * model: transformer-align * source language(s): jpn jpn_Hani jpn_Hira jpn_Kana jpn_Latn jpn_Yiii * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ita/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ita/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ita/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.ita | 22.8 | 0.460 | ### System Info: - hf_name: jpn-ita - source_languages: jpn - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'it'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ita/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-ita/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: ita - short_pair: ja-it - chrF2_score: 0.46 - bleu: 22.8 - brevity_penalty: 0.9540000000000001 - ref_len: 21500.0 - src_name: Japanese - tgt_name: Italian - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: it - prefer_old: False - long_pair: jpn-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "it"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-it
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "it" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-ita * source group: Japanese * target group: Italian * OPUS readme: jpn-ita * model: transformer-align * source language(s): jpn jpn\_Hani jpn\_Hira jpn\_Kana jpn\_Latn jpn\_Yiii * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.8, chr-F: 0.460 ### System Info: * hf\_name: jpn-ita * source\_languages: jpn * target\_languages: ita * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'it'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'ita'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: ita * short\_pair: ja-it * chrF2\_score: 0.46 * bleu: 22.8 * brevity\_penalty: 0.9540000000000001 * ref\_len: 21500.0 * src\_name: Japanese * tgt\_name: Italian * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: it * prefer\_old: False * long\_pair: jpn-ita * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-ita\n\n\n* source group: Japanese\n* target group: Italian\n* OPUS readme: jpn-ita\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.460", "### System Info:\n\n\n* hf\\_name: jpn-ita\n* source\\_languages: jpn\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'it']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: ita\n* short\\_pair: ja-it\n* chrF2\\_score: 0.46\n* bleu: 22.8\n* brevity\\_penalty: 0.9540000000000001\n* ref\\_len: 21500.0\n* src\\_name: Japanese\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: jpn-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-ita\n\n\n* source group: Japanese\n* target group: Italian\n* OPUS readme: jpn-ita\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.460", "### System Info:\n\n\n* hf\\_name: jpn-ita\n* source\\_languages: jpn\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'it']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: ita\n* short\\_pair: ja-it\n* chrF2\\_score: 0.46\n* bleu: 22.8\n* brevity\\_penalty: 0.9540000000000001\n* ref\\_len: 21500.0\n* src\\_name: Japanese\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: jpn-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 167, 469 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-ita\n\n\n* source group: Japanese\n* target group: Italian\n* OPUS readme: jpn-ita\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.460### System Info:\n\n\n* hf\\_name: jpn-ita\n* source\\_languages: jpn\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'it']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: ita\n* short\\_pair: ja-it\n* chrF2\\_score: 0.46\n* bleu: 22.8\n* brevity\\_penalty: 0.9540000000000001\n* ref\\_len: 21500.0\n* src\\_name: Japanese\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: jpn-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-msa * source group: Japanese * target group: Malay (macrolanguage) * OPUS readme: [jpn-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-msa/README.md) * model: transformer-align * source language(s): jpn jpn_Hani jpn_Hira jpn_Kana * target language(s): ind zlm_Latn zsm_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.msa | 21.5 | 0.469 | ### System Info: - hf_name: jpn-msa - source_languages: jpn - target_languages: msa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-msa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'ms'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-msa/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: msa - short_pair: ja-ms - chrF2_score: 0.469 - bleu: 21.5 - brevity_penalty: 0.9259999999999999 - ref_len: 17028.0 - src_name: Japanese - tgt_name: Malay (macrolanguage) - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: ms - prefer_old: False - long_pair: jpn-msa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "ms"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-ms
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "ms", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "ms" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-msa * source group: Japanese * target group: Malay (macrolanguage) * OPUS readme: jpn-msa * model: transformer-align * source language(s): jpn jpn\_Hani jpn\_Hira jpn\_Kana * target language(s): ind zlm\_Latn zsm\_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 21.5, chr-F: 0.469 ### System Info: * hf\_name: jpn-msa * source\_languages: jpn * target\_languages: msa * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'ms'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'zsm\_Latn', 'ind', 'max\_Latn', 'zlm\_Latn', 'min'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: msa * short\_pair: ja-ms * chrF2\_score: 0.469 * bleu: 21.5 * brevity\_penalty: 0.9259999999999999 * ref\_len: 17028.0 * src\_name: Japanese * tgt\_name: Malay (macrolanguage) * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: ms * prefer\_old: False * long\_pair: jpn-msa * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-msa\n\n\n* source group: Japanese\n* target group: Malay (macrolanguage)\n* OPUS readme: jpn-msa\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): ind zlm\\_Latn zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.5, chr-F: 0.469", "### System Info:\n\n\n* hf\\_name: jpn-msa\n* source\\_languages: jpn\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'ms']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: msa\n* short\\_pair: ja-ms\n* chrF2\\_score: 0.469\n* bleu: 21.5\n* brevity\\_penalty: 0.9259999999999999\n* ref\\_len: 17028.0\n* src\\_name: Japanese\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: jpn-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-msa\n\n\n* source group: Japanese\n* target group: Malay (macrolanguage)\n* OPUS readme: jpn-msa\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): ind zlm\\_Latn zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.5, chr-F: 0.469", "### System Info:\n\n\n* hf\\_name: jpn-msa\n* source\\_languages: jpn\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'ms']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: msa\n* short\\_pair: ja-ms\n* chrF2\\_score: 0.469\n* bleu: 21.5\n* brevity\\_penalty: 0.9259999999999999\n* ref\\_len: 17028.0\n* src\\_name: Japanese\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: jpn-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 201, 514 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #ms #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-msa\n\n\n* source group: Japanese\n* target group: Malay (macrolanguage)\n* OPUS readme: jpn-msa\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana\n* target language(s): ind zlm\\_Latn zsm\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.5, chr-F: 0.469### System Info:\n\n\n* hf\\_name: jpn-msa\n* source\\_languages: jpn\n* target\\_languages: msa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'ms']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'zsm\\_Latn', 'ind', 'max\\_Latn', 'zlm\\_Latn', 'min'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: msa\n* short\\_pair: ja-ms\n* chrF2\\_score: 0.469\n* bleu: 21.5\n* brevity\\_penalty: 0.9259999999999999\n* ref\\_len: 17028.0\n* src\\_name: Japanese\n* tgt\\_name: Malay (macrolanguage)\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: ms\n* prefer\\_old: False\n* long\\_pair: jpn-msa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-nld * source group: Japanese * target group: Dutch * OPUS readme: [jpn-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-nld/README.md) * model: transformer-align * source language(s): jpn jpn_Hani jpn_Hira jpn_Kana jpn_Latn * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-nld/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-nld/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-nld/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.nld | 34.7 | 0.534 | ### System Info: - hf_name: jpn-nld - source_languages: jpn - target_languages: nld - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-nld/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'nl'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'nld'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-nld/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-nld/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: nld - short_pair: ja-nl - chrF2_score: 0.534 - bleu: 34.7 - brevity_penalty: 0.938 - ref_len: 25849.0 - src_name: Japanese - tgt_name: Dutch - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: nl - prefer_old: False - long_pair: jpn-nld - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "nl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-nl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "nl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "nl" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-nld * source group: Japanese * target group: Dutch * OPUS readme: jpn-nld * model: transformer-align * source language(s): jpn jpn\_Hani jpn\_Hira jpn\_Kana jpn\_Latn * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 34.7, chr-F: 0.534 ### System Info: * hf\_name: jpn-nld * source\_languages: jpn * target\_languages: nld * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'nl'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'nld'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: nld * short\_pair: ja-nl * chrF2\_score: 0.534 * bleu: 34.7 * brevity\_penalty: 0.938 * ref\_len: 25849.0 * src\_name: Japanese * tgt\_name: Dutch * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: nl * prefer\_old: False * long\_pair: jpn-nld * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-nld\n\n\n* source group: Japanese\n* target group: Dutch\n* OPUS readme: jpn-nld\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.7, chr-F: 0.534", "### System Info:\n\n\n* hf\\_name: jpn-nld\n* source\\_languages: jpn\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'nl']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: nld\n* short\\_pair: ja-nl\n* chrF2\\_score: 0.534\n* bleu: 34.7\n* brevity\\_penalty: 0.938\n* ref\\_len: 25849.0\n* src\\_name: Japanese\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: jpn-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-nld\n\n\n* source group: Japanese\n* target group: Dutch\n* OPUS readme: jpn-nld\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.7, chr-F: 0.534", "### System Info:\n\n\n* hf\\_name: jpn-nld\n* source\\_languages: jpn\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'nl']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: nld\n* short\\_pair: ja-nl\n* chrF2\\_score: 0.534\n* bleu: 34.7\n* brevity\\_penalty: 0.938\n* ref\\_len: 25849.0\n* src\\_name: Japanese\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: jpn-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 162, 464 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #nl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-nld\n\n\n* source group: Japanese\n* target group: Dutch\n* OPUS readme: jpn-nld\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn\n* target language(s): nld\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.7, chr-F: 0.534### System Info:\n\n\n* hf\\_name: jpn-nld\n* source\\_languages: jpn\n* target\\_languages: nld\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'nl']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'nld'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: nld\n* short\\_pair: ja-nl\n* chrF2\\_score: 0.534\n* bleu: 34.7\n* brevity\\_penalty: 0.938\n* ref\\_len: 25849.0\n* src\\_name: Japanese\n* tgt\\_name: Dutch\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: nl\n* prefer\\_old: False\n* long\\_pair: jpn-nld\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-pol * source group: Japanese * target group: Polish * OPUS readme: [jpn-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-pol/README.md) * model: transformer-align * source language(s): jpn jpn_Bopo jpn_Hani jpn_Hira jpn_Kana jpn_Latn * target language(s): pol * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.pol | 15.7 | 0.386 | ### System Info: - hf_name: jpn-pol - source_languages: jpn - target_languages: pol - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-pol/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'pl'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'pol'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-pol/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: pol - short_pair: ja-pl - chrF2_score: 0.386 - bleu: 15.7 - brevity_penalty: 1.0 - ref_len: 69904.0 - src_name: Japanese - tgt_name: Polish - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: pl - prefer_old: False - long_pair: jpn-pol - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "pl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-pl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "pl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "pl" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-pol * source group: Japanese * target group: Polish * OPUS readme: jpn-pol * model: transformer-align * source language(s): jpn jpn\_Bopo jpn\_Hani jpn\_Hira jpn\_Kana jpn\_Latn * target language(s): pol * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 15.7, chr-F: 0.386 ### System Info: * hf\_name: jpn-pol * source\_languages: jpn * target\_languages: pol * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'pl'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'pol'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: pol * short\_pair: ja-pl * chrF2\_score: 0.386 * bleu: 15.7 * brevity\_penalty: 1.0 * ref\_len: 69904.0 * src\_name: Japanese * tgt\_name: Polish * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: pl * prefer\_old: False * long\_pair: jpn-pol * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-pol\n\n\n* source group: Japanese\n* target group: Polish\n* OPUS readme: jpn-pol\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.7, chr-F: 0.386", "### System Info:\n\n\n* hf\\_name: jpn-pol\n* source\\_languages: jpn\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'pl']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: pol\n* short\\_pair: ja-pl\n* chrF2\\_score: 0.386\n* bleu: 15.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 69904.0\n* src\\_name: Japanese\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: jpn-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-pol\n\n\n* source group: Japanese\n* target group: Polish\n* OPUS readme: jpn-pol\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.7, chr-F: 0.386", "### System Info:\n\n\n* hf\\_name: jpn-pol\n* source\\_languages: jpn\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'pl']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: pol\n* short\\_pair: ja-pl\n* chrF2\\_score: 0.386\n* bleu: 15.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 69904.0\n* src\\_name: Japanese\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: jpn-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 165, 459 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-pol\n\n\n* source group: Japanese\n* target group: Polish\n* OPUS readme: jpn-pol\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 15.7, chr-F: 0.386### System Info:\n\n\n* hf\\_name: jpn-pol\n* source\\_languages: jpn\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'pl']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: pol\n* short\\_pair: ja-pl\n* chrF2\\_score: 0.386\n* bleu: 15.7\n* brevity\\_penalty: 1.0\n* ref\\_len: 69904.0\n* src\\_name: Japanese\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: jpn-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-por * source group: Japanese * target group: Portuguese * OPUS readme: [jpn-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-por/README.md) * model: transformer-align * source language(s): jpn jpn_Hani jpn_Hira jpn_Kana jpn_Latn jpn_Yiii * target language(s): por por_Hira * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-por/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-por/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-por/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.por | 22.2 | 0.444 | ### System Info: - hf_name: jpn-por - source_languages: jpn - target_languages: por - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-por/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'pt'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'por'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-por/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-por/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: por - short_pair: ja-pt - chrF2_score: 0.444 - bleu: 22.2 - brevity_penalty: 0.922 - ref_len: 15570.0 - src_name: Japanese - tgt_name: Portuguese - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: pt - prefer_old: False - long_pair: jpn-por - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "pt"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-pt
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "pt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "pt" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-por * source group: Japanese * target group: Portuguese * OPUS readme: jpn-por * model: transformer-align * source language(s): jpn jpn\_Hani jpn\_Hira jpn\_Kana jpn\_Latn jpn\_Yiii * target language(s): por por\_Hira * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.2, chr-F: 0.444 ### System Info: * hf\_name: jpn-por * source\_languages: jpn * target\_languages: por * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'pt'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'por'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: por * short\_pair: ja-pt * chrF2\_score: 0.444 * bleu: 22.2 * brevity\_penalty: 0.922 * ref\_len: 15570.0 * src\_name: Japanese * tgt\_name: Portuguese * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: pt * prefer\_old: False * long\_pair: jpn-por * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-por\n\n\n* source group: Japanese\n* target group: Portuguese\n* OPUS readme: jpn-por\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): por por\\_Hira\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.2, chr-F: 0.444", "### System Info:\n\n\n* hf\\_name: jpn-por\n* source\\_languages: jpn\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'pt']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: por\n* short\\_pair: ja-pt\n* chrF2\\_score: 0.444\n* bleu: 22.2\n* brevity\\_penalty: 0.922\n* ref\\_len: 15570.0\n* src\\_name: Japanese\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: jpn-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-por\n\n\n* source group: Japanese\n* target group: Portuguese\n* OPUS readme: jpn-por\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): por por\\_Hira\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.2, chr-F: 0.444", "### System Info:\n\n\n* hf\\_name: jpn-por\n* source\\_languages: jpn\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'pt']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: por\n* short\\_pair: ja-pt\n* chrF2\\_score: 0.444\n* bleu: 22.2\n* brevity\\_penalty: 0.922\n* ref\\_len: 15570.0\n* src\\_name: Japanese\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: jpn-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 197, 459 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #pt #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-por\n\n\n* source group: Japanese\n* target group: Portuguese\n* OPUS readme: jpn-por\n* model: transformer-align\n* source language(s): jpn jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): por por\\_Hira\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.2, chr-F: 0.444### System Info:\n\n\n* hf\\_name: jpn-por\n* source\\_languages: jpn\n* target\\_languages: por\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'pt']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'por'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: por\n* short\\_pair: ja-pt\n* chrF2\\_score: 0.444\n* bleu: 22.2\n* brevity\\_penalty: 0.922\n* ref\\_len: 15570.0\n* src\\_name: Japanese\n* tgt\\_name: Portuguese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: pt\n* prefer\\_old: False\n* long\\_pair: jpn-por\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-rus * source group: Japanese * target group: Russian * OPUS readme: [jpn-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-rus/README.md) * model: transformer-align * source language(s): jpn jpn_Bopo jpn_Hani jpn_Hira jpn_Kana jpn_Latn jpn_Yiii * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-rus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-rus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-rus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.rus | 23.2 | 0.441 | ### System Info: - hf_name: jpn-rus - source_languages: jpn - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'ru'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-rus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-rus/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: rus - short_pair: ja-ru - chrF2_score: 0.441 - bleu: 23.2 - brevity_penalty: 0.9740000000000001 - ref_len: 70820.0 - src_name: Japanese - tgt_name: Russian - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: ru - prefer_old: False - long_pair: jpn-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "ru" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-rus * source group: Japanese * target group: Russian * OPUS readme: jpn-rus * model: transformer-align * source language(s): jpn jpn\_Bopo jpn\_Hani jpn\_Hira jpn\_Kana jpn\_Latn jpn\_Yiii * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.2, chr-F: 0.441 ### System Info: * hf\_name: jpn-rus * source\_languages: jpn * target\_languages: rus * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'ru'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'rus'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: rus * short\_pair: ja-ru * chrF2\_score: 0.441 * bleu: 23.2 * brevity\_penalty: 0.9740000000000001 * ref\_len: 70820.0 * src\_name: Japanese * tgt\_name: Russian * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: ru * prefer\_old: False * long\_pair: jpn-rus * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-rus\n\n\n* source group: Japanese\n* target group: Russian\n* OPUS readme: jpn-rus\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.441", "### System Info:\n\n\n* hf\\_name: jpn-rus\n* source\\_languages: jpn\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'ru']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: rus\n* short\\_pair: ja-ru\n* chrF2\\_score: 0.441\n* bleu: 23.2\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 70820.0\n* src\\_name: Japanese\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: jpn-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-rus\n\n\n* source group: Japanese\n* target group: Russian\n* OPUS readme: jpn-rus\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.441", "### System Info:\n\n\n* hf\\_name: jpn-rus\n* source\\_languages: jpn\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'ru']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: rus\n* short\\_pair: ja-ru\n* chrF2\\_score: 0.441\n* bleu: 23.2\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 70820.0\n* src\\_name: Japanese\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: jpn-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 170, 465 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-rus\n\n\n* source group: Japanese\n* target group: Russian\n* OPUS readme: jpn-rus\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.441### System Info:\n\n\n* hf\\_name: jpn-rus\n* source\\_languages: jpn\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'ru']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: rus\n* short\\_pair: ja-ru\n* chrF2\\_score: 0.441\n* bleu: 23.2\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 70820.0\n* src\\_name: Japanese\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: jpn-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-hbs * source group: Japanese * target group: Serbo-Croatian * OPUS readme: [jpn-hbs](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-hbs/README.md) * model: transformer-align * source language(s): jpn jpn_Bopo jpn_Hani jpn_Hira jpn_Kana jpn_Latn * target language(s): bos_Latn hrv srp_Cyrl srp_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hbs/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hbs/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hbs/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.hbs | 22.6 | 0.447 | ### System Info: - hf_name: jpn-hbs - source_languages: jpn - target_languages: hbs - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-hbs/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'sh'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hbs/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-hbs/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: hbs - short_pair: ja-sh - chrF2_score: 0.447 - bleu: 22.6 - brevity_penalty: 0.9620000000000001 - ref_len: 2525.0 - src_name: Japanese - tgt_name: Serbo-Croatian - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: sh - prefer_old: False - long_pair: jpn-hbs - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "sh"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-sh
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "sh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "sh" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #sh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-hbs * source group: Japanese * target group: Serbo-Croatian * OPUS readme: jpn-hbs * model: transformer-align * source language(s): jpn jpn\_Bopo jpn\_Hani jpn\_Hira jpn\_Kana jpn\_Latn * target language(s): bos\_Latn hrv srp\_Cyrl srp\_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.6, chr-F: 0.447 ### System Info: * hf\_name: jpn-hbs * source\_languages: jpn * target\_languages: hbs * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'sh'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'hrv', 'srp\_Cyrl', 'bos\_Latn', 'srp\_Latn'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: hbs * short\_pair: ja-sh * chrF2\_score: 0.447 * bleu: 22.6 * brevity\_penalty: 0.9620000000000001 * ref\_len: 2525.0 * src\_name: Japanese * tgt\_name: Serbo-Croatian * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: sh * prefer\_old: False * long\_pair: jpn-hbs * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-hbs\n\n\n* source group: Japanese\n* target group: Serbo-Croatian\n* OPUS readme: jpn-hbs\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn\n* target language(s): bos\\_Latn hrv srp\\_Cyrl srp\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.447", "### System Info:\n\n\n* hf\\_name: jpn-hbs\n* source\\_languages: jpn\n* target\\_languages: hbs\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'sh']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: hbs\n* short\\_pair: ja-sh\n* chrF2\\_score: 0.447\n* bleu: 22.6\n* brevity\\_penalty: 0.9620000000000001\n* ref\\_len: 2525.0\n* src\\_name: Japanese\n* tgt\\_name: Serbo-Croatian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: sh\n* prefer\\_old: False\n* long\\_pair: jpn-hbs\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #sh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-hbs\n\n\n* source group: Japanese\n* target group: Serbo-Croatian\n* OPUS readme: jpn-hbs\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn\n* target language(s): bos\\_Latn hrv srp\\_Cyrl srp\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.447", "### System Info:\n\n\n* hf\\_name: jpn-hbs\n* source\\_languages: jpn\n* target\\_languages: hbs\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'sh']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: hbs\n* short\\_pair: ja-sh\n* chrF2\\_score: 0.447\n* bleu: 22.6\n* brevity\\_penalty: 0.9620000000000001\n* ref\\_len: 2525.0\n* src\\_name: Japanese\n* tgt\\_name: Serbo-Croatian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: sh\n* prefer\\_old: False\n* long\\_pair: jpn-hbs\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 218, 502 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #sh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-hbs\n\n\n* source group: Japanese\n* target group: Serbo-Croatian\n* OPUS readme: jpn-hbs\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn\n* target language(s): bos\\_Latn hrv srp\\_Cyrl srp\\_Latn\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* a sentence initial language token is required in the form of '>>id<<' (id = valid target language ID)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.447### System Info:\n\n\n* hf\\_name: jpn-hbs\n* source\\_languages: jpn\n* target\\_languages: hbs\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'sh']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'hrv', 'srp\\_Cyrl', 'bos\\_Latn', 'srp\\_Latn'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: hbs\n* short\\_pair: ja-sh\n* chrF2\\_score: 0.447\n* bleu: 22.6\n* brevity\\_penalty: 0.9620000000000001\n* ref\\_len: 2525.0\n* src\\_name: Japanese\n* tgt\\_name: Serbo-Croatian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: sh\n* prefer\\_old: False\n* long\\_pair: jpn-hbs\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-ja-sv * source languages: ja * target languages: sv * OPUS readme: [ja-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ja-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ja-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ja-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ja.sv | 26.1 | 0.445 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ja-sv * source languages: ja * target languages: sv * OPUS readme: ja-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.1, chr-F: 0.445
[ "### opus-mt-ja-sv\n\n\n* source languages: ja\n* target languages: sv\n* OPUS readme: ja-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.445" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ja-sv\n\n\n* source languages: ja\n* target languages: sv\n* OPUS readme: ja-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.445" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ja-sv\n\n\n* source languages: ja\n* target languages: sv\n* OPUS readme: ja-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.1, chr-F: 0.445" ]
translation
transformers
### jpn-tur * source group: Japanese * target group: Turkish * OPUS readme: [jpn-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md) * model: transformer-align * source language(s): jpn jpn_Bopo jpn_Hang jpn_Hani jpn_Hira jpn_Kana jpn_Yiii * target language(s): tur * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.tur | 16.7 | 0.434 | ### System Info: - hf_name: jpn-tur - source_languages: jpn - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'tr'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: tur - short_pair: ja-tr - chrF2_score: 0.434 - bleu: 16.7 - brevity_penalty: 0.932 - ref_len: 4755.0 - src_name: Japanese - tgt_name: Turkish - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: tr - prefer_old: False - long_pair: jpn-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "tr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-tr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "tr" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-tur * source group: Japanese * target group: Turkish * OPUS readme: jpn-tur * model: transformer-align * source language(s): jpn jpn\_Bopo jpn\_Hang jpn\_Hani jpn\_Hira jpn\_Kana jpn\_Yiii * target language(s): tur * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 16.7, chr-F: 0.434 ### System Info: * hf\_name: jpn-tur * source\_languages: jpn * target\_languages: tur * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'tr'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'tur'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: tur * short\_pair: ja-tr * chrF2\_score: 0.434 * bleu: 16.7 * brevity\_penalty: 0.932 * ref\_len: 4755.0 * src\_name: Japanese * tgt\_name: Turkish * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: tr * prefer\_old: False * long\_pair: jpn-tur * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-tur\n\n\n* source group: Japanese\n* target group: Turkish\n* OPUS readme: jpn-tur\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hang jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Yiii\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.7, chr-F: 0.434", "### System Info:\n\n\n* hf\\_name: jpn-tur\n* source\\_languages: jpn\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'tr']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: tur\n* short\\_pair: ja-tr\n* chrF2\\_score: 0.434\n* bleu: 16.7\n* brevity\\_penalty: 0.932\n* ref\\_len: 4755.0\n* src\\_name: Japanese\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: jpn-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-tur\n\n\n* source group: Japanese\n* target group: Turkish\n* OPUS readme: jpn-tur\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hang jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Yiii\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.7, chr-F: 0.434", "### System Info:\n\n\n* hf\\_name: jpn-tur\n* source\\_languages: jpn\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'tr']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: tur\n* short\\_pair: ja-tr\n* chrF2\\_score: 0.434\n* bleu: 16.7\n* brevity\\_penalty: 0.932\n* ref\\_len: 4755.0\n* src\\_name: Japanese\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: jpn-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 172, 464 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-tur\n\n\n* source group: Japanese\n* target group: Turkish\n* OPUS readme: jpn-tur\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hang jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Yiii\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 16.7, chr-F: 0.434### System Info:\n\n\n* hf\\_name: jpn-tur\n* source\\_languages: jpn\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'tr']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: tur\n* short\\_pair: ja-tr\n* chrF2\\_score: 0.434\n* bleu: 16.7\n* brevity\\_penalty: 0.932\n* ref\\_len: 4755.0\n* src\\_name: Japanese\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: jpn-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### jpn-vie * source group: Japanese * target group: Vietnamese * OPUS readme: [jpn-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-vie/README.md) * model: transformer-align * source language(s): jpn jpn_Bopo jpn_Hang jpn_Hani jpn_Hira jpn_Kana jpn_Latn jpn_Yiii * target language(s): vie * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-vie/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-vie/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-vie/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.vie | 20.3 | 0.380 | ### System Info: - hf_name: jpn-vie - source_languages: jpn - target_languages: vie - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-vie/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'vi'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'vie', 'vie_Hani'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-vie/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-vie/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: vie - short_pair: ja-vi - chrF2_score: 0.38 - bleu: 20.3 - brevity_penalty: 0.909 - ref_len: 10779.0 - src_name: Japanese - tgt_name: Vietnamese - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: vi - prefer_old: False - long_pair: jpn-vie - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ja", "vi"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ja-vi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ja", "vi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ja", "vi" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ja #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### jpn-vie * source group: Japanese * target group: Vietnamese * OPUS readme: jpn-vie * model: transformer-align * source language(s): jpn jpn\_Bopo jpn\_Hang jpn\_Hani jpn\_Hira jpn\_Kana jpn\_Latn jpn\_Yiii * target language(s): vie * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.3, chr-F: 0.380 ### System Info: * hf\_name: jpn-vie * source\_languages: jpn * target\_languages: vie * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ja', 'vi'] * src\_constituents: {'jpn\_Hang', 'jpn', 'jpn\_Yiii', 'jpn\_Kana', 'jpn\_Hani', 'jpn\_Bopo', 'jpn\_Latn', 'jpn\_Hira'} * tgt\_constituents: {'vie', 'vie\_Hani'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: jpn * tgt\_alpha3: vie * short\_pair: ja-vi * chrF2\_score: 0.38 * bleu: 20.3 * brevity\_penalty: 0.909 * ref\_len: 10779.0 * src\_name: Japanese * tgt\_name: Vietnamese * train\_date: 2020-06-17 * src\_alpha2: ja * tgt\_alpha2: vi * prefer\_old: False * long\_pair: jpn-vie * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### jpn-vie\n\n\n* source group: Japanese\n* target group: Vietnamese\n* OPUS readme: jpn-vie\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hang jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.3, chr-F: 0.380", "### System Info:\n\n\n* hf\\_name: jpn-vie\n* source\\_languages: jpn\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'vi']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: vie\n* short\\_pair: ja-vi\n* chrF2\\_score: 0.38\n* bleu: 20.3\n* brevity\\_penalty: 0.909\n* ref\\_len: 10779.0\n* src\\_name: Japanese\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: jpn-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### jpn-vie\n\n\n* source group: Japanese\n* target group: Vietnamese\n* OPUS readme: jpn-vie\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hang jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.3, chr-F: 0.380", "### System Info:\n\n\n* hf\\_name: jpn-vie\n* source\\_languages: jpn\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'vi']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: vie\n* short\\_pair: ja-vi\n* chrF2\\_score: 0.38\n* bleu: 20.3\n* brevity\\_penalty: 0.909\n* ref\\_len: 10779.0\n* src\\_name: Japanese\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: jpn-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 175, 467 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ja #vi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### jpn-vie\n\n\n* source group: Japanese\n* target group: Vietnamese\n* OPUS readme: jpn-vie\n* model: transformer-align\n* source language(s): jpn jpn\\_Bopo jpn\\_Hang jpn\\_Hani jpn\\_Hira jpn\\_Kana jpn\\_Latn jpn\\_Yiii\n* target language(s): vie\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.3, chr-F: 0.380### System Info:\n\n\n* hf\\_name: jpn-vie\n* source\\_languages: jpn\n* target\\_languages: vie\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ja', 'vi']\n* src\\_constituents: {'jpn\\_Hang', 'jpn', 'jpn\\_Yiii', 'jpn\\_Kana', 'jpn\\_Hani', 'jpn\\_Bopo', 'jpn\\_Latn', 'jpn\\_Hira'}\n* tgt\\_constituents: {'vie', 'vie\\_Hani'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: jpn\n* tgt\\_alpha3: vie\n* short\\_pair: ja-vi\n* chrF2\\_score: 0.38\n* bleu: 20.3\n* brevity\\_penalty: 0.909\n* ref\\_len: 10779.0\n* src\\_name: Japanese\n* tgt\\_name: Vietnamese\n* train\\_date: 2020-06-17\n* src\\_alpha2: ja\n* tgt\\_alpha2: vi\n* prefer\\_old: False\n* long\\_pair: jpn-vie\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-jap-en * source languages: jap * target languages: en * OPUS readme: [jap-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/jap-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/jap-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/jap-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/jap-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | bible-uedin.jap.en | 52.6 | 0.703 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-jap-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "jap", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #jap #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-jap-en * source languages: jap * target languages: en * OPUS readme: jap-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 52.6, chr-F: 0.703
[ "### opus-mt-jap-en\n\n\n* source languages: jap\n* target languages: en\n* OPUS readme: jap-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.6, chr-F: 0.703" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #jap #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-jap-en\n\n\n* source languages: jap\n* target languages: en\n* OPUS readme: jap-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.6, chr-F: 0.703" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #jap #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-jap-en\n\n\n* source languages: jap\n* target languages: en\n* OPUS readme: jap-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 52.6, chr-F: 0.703" ]
translation
transformers
### kat-eng * source group: Georgian * target group: English * OPUS readme: [kat-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kat-eng/README.md) * model: transformer-align * source language(s): kat * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-eng/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-eng/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-eng/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kat.eng | 37.9 | 0.538 | ### System Info: - hf_name: kat-eng - source_languages: kat - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kat-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ka', 'en'] - src_constituents: {'kat'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kat-eng/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kat-eng/opus-2020-06-16.test.txt - src_alpha3: kat - tgt_alpha3: eng - short_pair: ka-en - chrF2_score: 0.5379999999999999 - bleu: 37.9 - brevity_penalty: 0.991 - ref_len: 5992.0 - src_name: Georgian - tgt_name: English - train_date: 2020-06-16 - src_alpha2: ka - tgt_alpha2: en - prefer_old: False - long_pair: kat-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ka", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ka-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ka", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ka", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ka #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### kat-eng * source group: Georgian * target group: English * OPUS readme: kat-eng * model: transformer-align * source language(s): kat * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 37.9, chr-F: 0.538 ### System Info: * hf\_name: kat-eng * source\_languages: kat * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ka', 'en'] * src\_constituents: {'kat'} * tgt\_constituents: {'eng'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm12k,spm12k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: kat * tgt\_alpha3: eng * short\_pair: ka-en * chrF2\_score: 0.5379999999999999 * bleu: 37.9 * brevity\_penalty: 0.991 * ref\_len: 5992.0 * src\_name: Georgian * tgt\_name: English * train\_date: 2020-06-16 * src\_alpha2: ka * tgt\_alpha2: en * prefer\_old: False * long\_pair: kat-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### kat-eng\n\n\n* source group: Georgian\n* target group: English\n* OPUS readme: kat-eng\n* model: transformer-align\n* source language(s): kat\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.538", "### System Info:\n\n\n* hf\\_name: kat-eng\n* source\\_languages: kat\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ka', 'en']\n* src\\_constituents: {'kat'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kat\n* tgt\\_alpha3: eng\n* short\\_pair: ka-en\n* chrF2\\_score: 0.5379999999999999\n* bleu: 37.9\n* brevity\\_penalty: 0.991\n* ref\\_len: 5992.0\n* src\\_name: Georgian\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: ka\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: kat-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ka #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### kat-eng\n\n\n* source group: Georgian\n* target group: English\n* OPUS readme: kat-eng\n* model: transformer-align\n* source language(s): kat\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.538", "### System Info:\n\n\n* hf\\_name: kat-eng\n* source\\_languages: kat\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ka', 'en']\n* src\\_constituents: {'kat'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kat\n* tgt\\_alpha3: eng\n* short\\_pair: ka-en\n* chrF2\\_score: 0.5379999999999999\n* bleu: 37.9\n* brevity\\_penalty: 0.991\n* ref\\_len: 5992.0\n* src\\_name: Georgian\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: ka\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: kat-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 131, 405 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ka #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### kat-eng\n\n\n* source group: Georgian\n* target group: English\n* OPUS readme: kat-eng\n* model: transformer-align\n* source language(s): kat\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.9, chr-F: 0.538### System Info:\n\n\n* hf\\_name: kat-eng\n* source\\_languages: kat\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ka', 'en']\n* src\\_constituents: {'kat'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kat\n* tgt\\_alpha3: eng\n* short\\_pair: ka-en\n* chrF2\\_score: 0.5379999999999999\n* bleu: 37.9\n* brevity\\_penalty: 0.991\n* ref\\_len: 5992.0\n* src\\_name: Georgian\n* tgt\\_name: English\n* train\\_date: 2020-06-16\n* src\\_alpha2: ka\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: kat-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### kat-rus * source group: Georgian * target group: Russian * OPUS readme: [kat-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kat-rus/README.md) * model: transformer-align * source language(s): kat * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-rus/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-rus/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kat-rus/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kat.rus | 38.2 | 0.604 | ### System Info: - hf_name: kat-rus - source_languages: kat - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kat-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ka', 'ru'] - src_constituents: {'kat'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kat-rus/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kat-rus/opus-2020-06-16.test.txt - src_alpha3: kat - tgt_alpha3: rus - short_pair: ka-ru - chrF2_score: 0.604 - bleu: 38.2 - brevity_penalty: 0.996 - ref_len: 3899.0 - src_name: Georgian - tgt_name: Russian - train_date: 2020-06-16 - src_alpha2: ka - tgt_alpha2: ru - prefer_old: False - long_pair: kat-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ka", "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ka-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ka", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ka", "ru" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ka #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### kat-rus * source group: Georgian * target group: Russian * OPUS readme: kat-rus * model: transformer-align * source language(s): kat * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm12k,spm12k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 38.2, chr-F: 0.604 ### System Info: * hf\_name: kat-rus * source\_languages: kat * target\_languages: rus * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ka', 'ru'] * src\_constituents: {'kat'} * tgt\_constituents: {'rus'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm12k,spm12k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: kat * tgt\_alpha3: rus * short\_pair: ka-ru * chrF2\_score: 0.604 * bleu: 38.2 * brevity\_penalty: 0.996 * ref\_len: 3899.0 * src\_name: Georgian * tgt\_name: Russian * train\_date: 2020-06-16 * src\_alpha2: ka * tgt\_alpha2: ru * prefer\_old: False * long\_pair: kat-rus * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### kat-rus\n\n\n* source group: Georgian\n* target group: Russian\n* OPUS readme: kat-rus\n* model: transformer-align\n* source language(s): kat\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.604", "### System Info:\n\n\n* hf\\_name: kat-rus\n* source\\_languages: kat\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ka', 'ru']\n* src\\_constituents: {'kat'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kat\n* tgt\\_alpha3: rus\n* short\\_pair: ka-ru\n* chrF2\\_score: 0.604\n* bleu: 38.2\n* brevity\\_penalty: 0.996\n* ref\\_len: 3899.0\n* src\\_name: Georgian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: ka\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: kat-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ka #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### kat-rus\n\n\n* source group: Georgian\n* target group: Russian\n* OPUS readme: kat-rus\n* model: transformer-align\n* source language(s): kat\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.604", "### System Info:\n\n\n* hf\\_name: kat-rus\n* source\\_languages: kat\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ka', 'ru']\n* src\\_constituents: {'kat'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kat\n* tgt\\_alpha3: rus\n* short\\_pair: ka-ru\n* chrF2\\_score: 0.604\n* bleu: 38.2\n* brevity\\_penalty: 0.996\n* ref\\_len: 3899.0\n* src\\_name: Georgian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: ka\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: kat-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 131, 392 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ka #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### kat-rus\n\n\n* source group: Georgian\n* target group: Russian\n* OPUS readme: kat-rus\n* model: transformer-align\n* source language(s): kat\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm12k,spm12k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 38.2, chr-F: 0.604### System Info:\n\n\n* hf\\_name: kat-rus\n* source\\_languages: kat\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ka', 'ru']\n* src\\_constituents: {'kat'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm12k,spm12k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kat\n* tgt\\_alpha3: rus\n* short\\_pair: ka-ru\n* chrF2\\_score: 0.604\n* bleu: 38.2\n* brevity\\_penalty: 0.996\n* ref\\_len: 3899.0\n* src\\_name: Georgian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-16\n* src\\_alpha2: ka\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: kat-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-kab-en * source languages: kab * target languages: en * OPUS readme: [kab-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kab-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/kab-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kab-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kab-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.kab.en | 27.5 | 0.408 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kab-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kab", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kab #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kab-en * source languages: kab * target languages: en * OPUS readme: kab-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.5, chr-F: 0.408
[ "### opus-mt-kab-en\n\n\n* source languages: kab\n* target languages: en\n* OPUS readme: kab-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.408" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kab #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kab-en\n\n\n* source languages: kab\n* target languages: en\n* OPUS readme: kab-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.408" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kab #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kab-en\n\n\n* source languages: kab\n* target languages: en\n* OPUS readme: kab-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.408" ]
translation
transformers
### opus-mt-kg-en * source languages: kg * target languages: en * OPUS readme: [kg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kg-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kg-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kg.en | 35.4 | 0.508 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kg-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kg", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kg-en * source languages: kg * target languages: en * OPUS readme: kg-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 35.4, chr-F: 0.508
[ "### opus-mt-kg-en\n\n\n* source languages: kg\n* target languages: en\n* OPUS readme: kg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.4, chr-F: 0.508" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kg-en\n\n\n* source languages: kg\n* target languages: en\n* OPUS readme: kg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.4, chr-F: 0.508" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kg-en\n\n\n* source languages: kg\n* target languages: en\n* OPUS readme: kg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.4, chr-F: 0.508" ]
translation
transformers
### opus-mt-kg-es * source languages: kg * target languages: es * OPUS readme: [kg-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kg-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/kg-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kg.es | 22.4 | 0.402 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kg-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kg", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kg-es * source languages: kg * target languages: es * OPUS readme: kg-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.4, chr-F: 0.402
[ "### opus-mt-kg-es\n\n\n* source languages: kg\n* target languages: es\n* OPUS readme: kg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.402" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kg-es\n\n\n* source languages: kg\n* target languages: es\n* OPUS readme: kg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.402" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kg-es\n\n\n* source languages: kg\n* target languages: es\n* OPUS readme: kg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.402" ]
translation
transformers
### opus-mt-kg-fr * source languages: kg * target languages: fr * OPUS readme: [kg-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kg-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kg-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kg.fr | 26.0 | 0.433 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kg-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kg", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kg-fr * source languages: kg * target languages: fr * OPUS readme: kg-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.0, chr-F: 0.433
[ "### opus-mt-kg-fr\n\n\n* source languages: kg\n* target languages: fr\n* OPUS readme: kg-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.0, chr-F: 0.433" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kg-fr\n\n\n* source languages: kg\n* target languages: fr\n* OPUS readme: kg-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.0, chr-F: 0.433" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kg-fr\n\n\n* source languages: kg\n* target languages: fr\n* OPUS readme: kg-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.0, chr-F: 0.433" ]
translation
transformers
### opus-mt-kg-sv * source languages: kg * target languages: sv * OPUS readme: [kg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kg-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kg-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kg-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kg.sv | 26.3 | 0.440 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kg-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kg", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kg-sv * source languages: kg * target languages: sv * OPUS readme: kg-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.3, chr-F: 0.440
[ "### opus-mt-kg-sv\n\n\n* source languages: kg\n* target languages: sv\n* OPUS readme: kg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.440" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kg-sv\n\n\n* source languages: kg\n* target languages: sv\n* OPUS readme: kg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.440" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kg-sv\n\n\n* source languages: kg\n* target languages: sv\n* OPUS readme: kg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.3, chr-F: 0.440" ]
translation
transformers
### opus-mt-kj-en * source languages: kj * target languages: en * OPUS readme: [kj-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kj-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/kj-en/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kj-en/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kj-en/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kj.en | 30.3 | 0.477 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kj-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kj", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kj #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kj-en * source languages: kj * target languages: en * OPUS readme: kj-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.3, chr-F: 0.477
[ "### opus-mt-kj-en\n\n\n* source languages: kj\n* target languages: en\n* OPUS readme: kj-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.3, chr-F: 0.477" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kj #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kj-en\n\n\n* source languages: kj\n* target languages: en\n* OPUS readme: kj-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.3, chr-F: 0.477" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kj #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kj-en\n\n\n* source languages: kj\n* target languages: en\n* OPUS readme: kj-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.3, chr-F: 0.477" ]
translation
transformers
### opus-mt-kl-en * source languages: kl * target languages: en * OPUS readme: [kl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kl-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kl-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kl-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kl-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kl.en | 26.4 | 0.432 | | Tatoeba.kl.en | 35.5 | 0.443 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kl-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kl-en * source languages: kl * target languages: en * OPUS readme: kl-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.4, chr-F: 0.432 testset: URL, BLEU: 35.5, chr-F: 0.443
[ "### opus-mt-kl-en\n\n\n* source languages: kl\n* target languages: en\n* OPUS readme: kl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.432\ntestset: URL, BLEU: 35.5, chr-F: 0.443" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kl-en\n\n\n* source languages: kl\n* target languages: en\n* OPUS readme: kl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.432\ntestset: URL, BLEU: 35.5, chr-F: 0.443" ]
[ 52, 132 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kl #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kl-en\n\n\n* source languages: kl\n* target languages: en\n* OPUS readme: kl-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.432\ntestset: URL, BLEU: 35.5, chr-F: 0.443" ]
translation
transformers
### opus-mt-ko-de * source languages: ko * target languages: de * OPUS readme: [ko-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ko-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ko-de/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ko-de/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ko-de/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ko.de | 30.2 | 0.523 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ko-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ko", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ko #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ko-de * source languages: ko * target languages: de * OPUS readme: ko-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.2, chr-F: 0.523
[ "### opus-mt-ko-de\n\n\n* source languages: ko\n* target languages: de\n* OPUS readme: ko-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.523" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ko-de\n\n\n* source languages: ko\n* target languages: de\n* OPUS readme: ko-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.523" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ko-de\n\n\n* source languages: ko\n* target languages: de\n* OPUS readme: ko-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.2, chr-F: 0.523" ]
translation
transformers
### kor-eng * source group: Korean * target group: English * OPUS readme: [kor-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-eng/README.md) * model: transformer-align * source language(s): kor kor_Hang kor_Latn * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kor.eng | 41.3 | 0.588 | ### System Info: - hf_name: kor-eng - source_languages: kor - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ko', 'en'] - src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-eng/opus-2020-06-17.test.txt - src_alpha3: kor - tgt_alpha3: eng - short_pair: ko-en - chrF2_score: 0.588 - bleu: 41.3 - brevity_penalty: 0.9590000000000001 - ref_len: 17711.0 - src_name: Korean - tgt_name: English - train_date: 2020-06-17 - src_alpha2: ko - tgt_alpha2: en - prefer_old: False - long_pair: kor-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ko", "en"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ko-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ko", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ko", "en" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### kor-eng * source group: Korean * target group: English * OPUS readme: kor-eng * model: transformer-align * source language(s): kor kor\_Hang kor\_Latn * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 41.3, chr-F: 0.588 ### System Info: * hf\_name: kor-eng * source\_languages: kor * target\_languages: eng * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ko', 'en'] * src\_constituents: {'kor\_Hani', 'kor\_Hang', 'kor\_Latn', 'kor'} * tgt\_constituents: {'eng'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: kor * tgt\_alpha3: eng * short\_pair: ko-en * chrF2\_score: 0.588 * bleu: 41.3 * brevity\_penalty: 0.9590000000000001 * ref\_len: 17711.0 * src\_name: Korean * tgt\_name: English * train\_date: 2020-06-17 * src\_alpha2: ko * tgt\_alpha2: en * prefer\_old: False * long\_pair: kor-eng * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### kor-eng\n\n\n* source group: Korean\n* target group: English\n* OPUS readme: kor-eng\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.3, chr-F: 0.588", "### System Info:\n\n\n* hf\\_name: kor-eng\n* source\\_languages: kor\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'en']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: eng\n* short\\_pair: ko-en\n* chrF2\\_score: 0.588\n* bleu: 41.3\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 17711.0\n* src\\_name: Korean\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: kor-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### kor-eng\n\n\n* source group: Korean\n* target group: English\n* OPUS readme: kor-eng\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.3, chr-F: 0.588", "### System Info:\n\n\n* hf\\_name: kor-eng\n* source\\_languages: kor\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'en']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: eng\n* short\\_pair: ko-en\n* chrF2\\_score: 0.588\n* bleu: 41.3\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 17711.0\n* src\\_name: Korean\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: kor-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 146, 429 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### kor-eng\n\n\n* source group: Korean\n* target group: English\n* OPUS readme: kor-eng\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): eng\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 41.3, chr-F: 0.588### System Info:\n\n\n* hf\\_name: kor-eng\n* source\\_languages: kor\n* target\\_languages: eng\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'en']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'eng'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: eng\n* short\\_pair: ko-en\n* chrF2\\_score: 0.588\n* bleu: 41.3\n* brevity\\_penalty: 0.9590000000000001\n* ref\\_len: 17711.0\n* src\\_name: Korean\n* tgt\\_name: English\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: en\n* prefer\\_old: False\n* long\\_pair: kor-eng\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### kor-spa * source group: Korean * target group: Spanish * OPUS readme: [kor-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-spa/README.md) * model: transformer-align * source language(s): kor kor_Hang kor_Latn * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kor.spa | 31.3 | 0.521 | ### System Info: - hf_name: kor-spa - source_languages: kor - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ko', 'es'] - src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-spa/opus-2020-06-17.test.txt - src_alpha3: kor - tgt_alpha3: spa - short_pair: ko-es - chrF2_score: 0.521 - bleu: 31.3 - brevity_penalty: 0.95 - ref_len: 6805.0 - src_name: Korean - tgt_name: Spanish - train_date: 2020-06-17 - src_alpha2: ko - tgt_alpha2: es - prefer_old: False - long_pair: kor-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ko", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ko-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ko", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ko", "es" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ko #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### kor-spa * source group: Korean * target group: Spanish * OPUS readme: kor-spa * model: transformer-align * source language(s): kor kor\_Hang kor\_Latn * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 31.3, chr-F: 0.521 ### System Info: * hf\_name: kor-spa * source\_languages: kor * target\_languages: spa * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ko', 'es'] * src\_constituents: {'kor\_Hani', 'kor\_Hang', 'kor\_Latn', 'kor'} * tgt\_constituents: {'spa'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: kor * tgt\_alpha3: spa * short\_pair: ko-es * chrF2\_score: 0.521 * bleu: 31.3 * brevity\_penalty: 0.95 * ref\_len: 6805.0 * src\_name: Korean * tgt\_name: Spanish * train\_date: 2020-06-17 * src\_alpha2: ko * tgt\_alpha2: es * prefer\_old: False * long\_pair: kor-spa * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### kor-spa\n\n\n* source group: Korean\n* target group: Spanish\n* OPUS readme: kor-spa\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.3, chr-F: 0.521", "### System Info:\n\n\n* hf\\_name: kor-spa\n* source\\_languages: kor\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'es']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: spa\n* short\\_pair: ko-es\n* chrF2\\_score: 0.521\n* bleu: 31.3\n* brevity\\_penalty: 0.95\n* ref\\_len: 6805.0\n* src\\_name: Korean\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: kor-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### kor-spa\n\n\n* source group: Korean\n* target group: Spanish\n* OPUS readme: kor-spa\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.3, chr-F: 0.521", "### System Info:\n\n\n* hf\\_name: kor-spa\n* source\\_languages: kor\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'es']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: spa\n* short\\_pair: ko-es\n* chrF2\\_score: 0.521\n* bleu: 31.3\n* brevity\\_penalty: 0.95\n* ref\\_len: 6805.0\n* src\\_name: Korean\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: kor-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 145, 421 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### kor-spa\n\n\n* source group: Korean\n* target group: Spanish\n* OPUS readme: kor-spa\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.3, chr-F: 0.521### System Info:\n\n\n* hf\\_name: kor-spa\n* source\\_languages: kor\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'es']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: spa\n* short\\_pair: ko-es\n* chrF2\\_score: 0.521\n* bleu: 31.3\n* brevity\\_penalty: 0.95\n* ref\\_len: 6805.0\n* src\\_name: Korean\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: kor-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### kor-fin * source group: Korean * target group: Finnish * OPUS readme: [kor-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-fin/README.md) * model: transformer-align * source language(s): kor kor_Hang kor_Latn * target language(s): fin * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kor.fin | 26.6 | 0.502 | ### System Info: - hf_name: kor-fin - source_languages: kor - target_languages: fin - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-fin/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ko', 'fi'] - src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'} - tgt_constituents: {'fin'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fin/opus-2020-06-17.test.txt - src_alpha3: kor - tgt_alpha3: fin - short_pair: ko-fi - chrF2_score: 0.502 - bleu: 26.6 - brevity_penalty: 0.892 - ref_len: 2251.0 - src_name: Korean - tgt_name: Finnish - train_date: 2020-06-17 - src_alpha2: ko - tgt_alpha2: fi - prefer_old: False - long_pair: kor-fin - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ko", "fi"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ko-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ko", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ko", "fi" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ko #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### kor-fin * source group: Korean * target group: Finnish * OPUS readme: kor-fin * model: transformer-align * source language(s): kor kor\_Hang kor\_Latn * target language(s): fin * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.6, chr-F: 0.502 ### System Info: * hf\_name: kor-fin * source\_languages: kor * target\_languages: fin * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ko', 'fi'] * src\_constituents: {'kor\_Hani', 'kor\_Hang', 'kor\_Latn', 'kor'} * tgt\_constituents: {'fin'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: kor * tgt\_alpha3: fin * short\_pair: ko-fi * chrF2\_score: 0.502 * bleu: 26.6 * brevity\_penalty: 0.892 * ref\_len: 2251.0 * src\_name: Korean * tgt\_name: Finnish * train\_date: 2020-06-17 * src\_alpha2: ko * tgt\_alpha2: fi * prefer\_old: False * long\_pair: kor-fin * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### kor-fin\n\n\n* source group: Korean\n* target group: Finnish\n* OPUS readme: kor-fin\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.502", "### System Info:\n\n\n* hf\\_name: kor-fin\n* source\\_languages: kor\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'fi']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: fin\n* short\\_pair: ko-fi\n* chrF2\\_score: 0.502\n* bleu: 26.6\n* brevity\\_penalty: 0.892\n* ref\\_len: 2251.0\n* src\\_name: Korean\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: kor-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### kor-fin\n\n\n* source group: Korean\n* target group: Finnish\n* OPUS readme: kor-fin\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.502", "### System Info:\n\n\n* hf\\_name: kor-fin\n* source\\_languages: kor\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'fi']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: fin\n* short\\_pair: ko-fi\n* chrF2\\_score: 0.502\n* bleu: 26.6\n* brevity\\_penalty: 0.892\n* ref\\_len: 2251.0\n* src\\_name: Korean\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: kor-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 146, 423 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### kor-fin\n\n\n* source group: Korean\n* target group: Finnish\n* OPUS readme: kor-fin\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): fin\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.6, chr-F: 0.502### System Info:\n\n\n* hf\\_name: kor-fin\n* source\\_languages: kor\n* target\\_languages: fin\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'fi']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'fin'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: fin\n* short\\_pair: ko-fi\n* chrF2\\_score: 0.502\n* bleu: 26.6\n* brevity\\_penalty: 0.892\n* ref\\_len: 2251.0\n* src\\_name: Korean\n* tgt\\_name: Finnish\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: fi\n* prefer\\_old: False\n* long\\_pair: kor-fin\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### kor-fra * source group: Korean * target group: French * OPUS readme: [kor-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-fra/README.md) * model: transformer-align * source language(s): kor kor_Hang kor_Hani kor_Latn * target language(s): fra * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fra/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fra/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fra/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kor.fra | 30.4 | 0.503 | ### System Info: - hf_name: kor-fra - source_languages: kor - target_languages: fra - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-fra/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ko', 'fr'] - src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'} - tgt_constituents: {'fra'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fra/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-fra/opus-2020-06-17.test.txt - src_alpha3: kor - tgt_alpha3: fra - short_pair: ko-fr - chrF2_score: 0.503 - bleu: 30.4 - brevity_penalty: 0.9179999999999999 - ref_len: 2714.0 - src_name: Korean - tgt_name: French - train_date: 2020-06-17 - src_alpha2: ko - tgt_alpha2: fr - prefer_old: False - long_pair: kor-fra - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ko", "fr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ko-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ko", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ko", "fr" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ko #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### kor-fra * source group: Korean * target group: French * OPUS readme: kor-fra * model: transformer-align * source language(s): kor kor\_Hang kor\_Hani kor\_Latn * target language(s): fra * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.4, chr-F: 0.503 ### System Info: * hf\_name: kor-fra * source\_languages: kor * target\_languages: fra * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ko', 'fr'] * src\_constituents: {'kor\_Hani', 'kor\_Hang', 'kor\_Latn', 'kor'} * tgt\_constituents: {'fra'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: kor * tgt\_alpha3: fra * short\_pair: ko-fr * chrF2\_score: 0.503 * bleu: 30.4 * brevity\_penalty: 0.9179999999999999 * ref\_len: 2714.0 * src\_name: Korean * tgt\_name: French * train\_date: 2020-06-17 * src\_alpha2: ko * tgt\_alpha2: fr * prefer\_old: False * long\_pair: kor-fra * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### kor-fra\n\n\n* source group: Korean\n* target group: French\n* OPUS readme: kor-fra\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Hani kor\\_Latn\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.503", "### System Info:\n\n\n* hf\\_name: kor-fra\n* source\\_languages: kor\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'fr']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: fra\n* short\\_pair: ko-fr\n* chrF2\\_score: 0.503\n* bleu: 30.4\n* brevity\\_penalty: 0.9179999999999999\n* ref\\_len: 2714.0\n* src\\_name: Korean\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: kor-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### kor-fra\n\n\n* source group: Korean\n* target group: French\n* OPUS readme: kor-fra\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Hani kor\\_Latn\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.503", "### System Info:\n\n\n* hf\\_name: kor-fra\n* source\\_languages: kor\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'fr']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: fra\n* short\\_pair: ko-fr\n* chrF2\\_score: 0.503\n* bleu: 30.4\n* brevity\\_penalty: 0.9179999999999999\n* ref\\_len: 2714.0\n* src\\_name: Korean\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: kor-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 152, 436 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### kor-fra\n\n\n* source group: Korean\n* target group: French\n* OPUS readme: kor-fra\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Hani kor\\_Latn\n* target language(s): fra\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.4, chr-F: 0.503### System Info:\n\n\n* hf\\_name: kor-fra\n* source\\_languages: kor\n* target\\_languages: fra\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'fr']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'fra'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: fra\n* short\\_pair: ko-fr\n* chrF2\\_score: 0.503\n* bleu: 30.4\n* brevity\\_penalty: 0.9179999999999999\n* ref\\_len: 2714.0\n* src\\_name: Korean\n* tgt\\_name: French\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: fr\n* prefer\\_old: False\n* long\\_pair: kor-fra\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### kor-hun * source group: Korean * target group: Hungarian * OPUS readme: [kor-hun](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-hun/README.md) * model: transformer-align * source language(s): kor kor_Hang kor_Latn * target language(s): hun * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-hun/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-hun/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-hun/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kor.hun | 28.6 | 0.520 | ### System Info: - hf_name: kor-hun - source_languages: kor - target_languages: hun - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-hun/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ko', 'hu'] - src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'} - tgt_constituents: {'hun'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-hun/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-hun/opus-2020-06-17.test.txt - src_alpha3: kor - tgt_alpha3: hun - short_pair: ko-hu - chrF2_score: 0.52 - bleu: 28.6 - brevity_penalty: 0.905 - ref_len: 1615.0 - src_name: Korean - tgt_name: Hungarian - train_date: 2020-06-17 - src_alpha2: ko - tgt_alpha2: hu - prefer_old: False - long_pair: kor-hun - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ko", "hu"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ko-hu
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ko", "hu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ko", "hu" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ko #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### kor-hun * source group: Korean * target group: Hungarian * OPUS readme: kor-hun * model: transformer-align * source language(s): kor kor\_Hang kor\_Latn * target language(s): hun * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 28.6, chr-F: 0.520 ### System Info: * hf\_name: kor-hun * source\_languages: kor * target\_languages: hun * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ko', 'hu'] * src\_constituents: {'kor\_Hani', 'kor\_Hang', 'kor\_Latn', 'kor'} * tgt\_constituents: {'hun'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: kor * tgt\_alpha3: hun * short\_pair: ko-hu * chrF2\_score: 0.52 * bleu: 28.6 * brevity\_penalty: 0.905 * ref\_len: 1615.0 * src\_name: Korean * tgt\_name: Hungarian * train\_date: 2020-06-17 * src\_alpha2: ko * tgt\_alpha2: hu * prefer\_old: False * long\_pair: kor-hun * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### kor-hun\n\n\n* source group: Korean\n* target group: Hungarian\n* OPUS readme: kor-hun\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.520", "### System Info:\n\n\n* hf\\_name: kor-hun\n* source\\_languages: kor\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'hu']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: hun\n* short\\_pair: ko-hu\n* chrF2\\_score: 0.52\n* bleu: 28.6\n* brevity\\_penalty: 0.905\n* ref\\_len: 1615.0\n* src\\_name: Korean\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: kor-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### kor-hun\n\n\n* source group: Korean\n* target group: Hungarian\n* OPUS readme: kor-hun\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.520", "### System Info:\n\n\n* hf\\_name: kor-hun\n* source\\_languages: kor\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'hu']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: hun\n* short\\_pair: ko-hu\n* chrF2\\_score: 0.52\n* bleu: 28.6\n* brevity\\_penalty: 0.905\n* ref\\_len: 1615.0\n* src\\_name: Korean\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: kor-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 148, 427 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #hu #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### kor-hun\n\n\n* source group: Korean\n* target group: Hungarian\n* OPUS readme: kor-hun\n* model: transformer-align\n* source language(s): kor kor\\_Hang kor\\_Latn\n* target language(s): hun\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.6, chr-F: 0.520### System Info:\n\n\n* hf\\_name: kor-hun\n* source\\_languages: kor\n* target\\_languages: hun\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'hu']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'hun'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: hun\n* short\\_pair: ko-hu\n* chrF2\\_score: 0.52\n* bleu: 28.6\n* brevity\\_penalty: 0.905\n* ref\\_len: 1615.0\n* src\\_name: Korean\n* tgt\\_name: Hungarian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: hu\n* prefer\\_old: False\n* long\\_pair: kor-hun\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### kor-rus * source group: Korean * target group: Russian * OPUS readme: [kor-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-rus/README.md) * model: transformer-align * source language(s): kor_Hang kor_Latn * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-rus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-rus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/kor-rus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.kor.rus | 30.3 | 0.514 | ### System Info: - hf_name: kor-rus - source_languages: kor - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/kor-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ko', 'ru'] - src_constituents: {'kor_Hani', 'kor_Hang', 'kor_Latn', 'kor'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-rus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/kor-rus/opus-2020-06-17.test.txt - src_alpha3: kor - tgt_alpha3: rus - short_pair: ko-ru - chrF2_score: 0.514 - bleu: 30.3 - brevity_penalty: 0.961 - ref_len: 1382.0 - src_name: Korean - tgt_name: Russian - train_date: 2020-06-17 - src_alpha2: ko - tgt_alpha2: ru - prefer_old: False - long_pair: kor-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["ko", "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ko-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ko", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ko", "ru" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ko #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### kor-rus * source group: Korean * target group: Russian * OPUS readme: kor-rus * model: transformer-align * source language(s): kor\_Hang kor\_Latn * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.3, chr-F: 0.514 ### System Info: * hf\_name: kor-rus * source\_languages: kor * target\_languages: rus * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['ko', 'ru'] * src\_constituents: {'kor\_Hani', 'kor\_Hang', 'kor\_Latn', 'kor'} * tgt\_constituents: {'rus'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: kor * tgt\_alpha3: rus * short\_pair: ko-ru * chrF2\_score: 0.514 * bleu: 30.3 * brevity\_penalty: 0.961 * ref\_len: 1382.0 * src\_name: Korean * tgt\_name: Russian * train\_date: 2020-06-17 * src\_alpha2: ko * tgt\_alpha2: ru * prefer\_old: False * long\_pair: kor-rus * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### kor-rus\n\n\n* source group: Korean\n* target group: Russian\n* OPUS readme: kor-rus\n* model: transformer-align\n* source language(s): kor\\_Hang kor\\_Latn\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.3, chr-F: 0.514", "### System Info:\n\n\n* hf\\_name: kor-rus\n* source\\_languages: kor\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'ru']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: rus\n* short\\_pair: ko-ru\n* chrF2\\_score: 0.514\n* bleu: 30.3\n* brevity\\_penalty: 0.961\n* ref\\_len: 1382.0\n* src\\_name: Korean\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: kor-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### kor-rus\n\n\n* source group: Korean\n* target group: Russian\n* OPUS readme: kor-rus\n* model: transformer-align\n* source language(s): kor\\_Hang kor\\_Latn\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.3, chr-F: 0.514", "### System Info:\n\n\n* hf\\_name: kor-rus\n* source\\_languages: kor\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'ru']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: rus\n* short\\_pair: ko-ru\n* chrF2\\_score: 0.514\n* bleu: 30.3\n* brevity\\_penalty: 0.961\n* ref\\_len: 1382.0\n* src\\_name: Korean\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: kor-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 144, 423 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### kor-rus\n\n\n* source group: Korean\n* target group: Russian\n* OPUS readme: kor-rus\n* model: transformer-align\n* source language(s): kor\\_Hang kor\\_Latn\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.3, chr-F: 0.514### System Info:\n\n\n* hf\\_name: kor-rus\n* source\\_languages: kor\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['ko', 'ru']\n* src\\_constituents: {'kor\\_Hani', 'kor\\_Hang', 'kor\\_Latn', 'kor'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: kor\n* tgt\\_alpha3: rus\n* short\\_pair: ko-ru\n* chrF2\\_score: 0.514\n* bleu: 30.3\n* brevity\\_penalty: 0.961\n* ref\\_len: 1382.0\n* src\\_name: Korean\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: ko\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: kor-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-ko-sv * source languages: ko * target languages: sv * OPUS readme: [ko-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ko-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ko-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ko-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ko-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ko.sv | 26.5 | 0.468 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ko-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ko", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ko #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ko-sv * source languages: ko * target languages: sv * OPUS readme: ko-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.5, chr-F: 0.468
[ "### opus-mt-ko-sv\n\n\n* source languages: ko\n* target languages: sv\n* OPUS readme: ko-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.5, chr-F: 0.468" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ko-sv\n\n\n* source languages: ko\n* target languages: sv\n* OPUS readme: ko-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.5, chr-F: 0.468" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ko #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ko-sv\n\n\n* source languages: ko\n* target languages: sv\n* OPUS readme: ko-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.5, chr-F: 0.468" ]
translation
transformers
### opus-mt-kqn-en * source languages: kqn * target languages: en * OPUS readme: [kqn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kqn-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kqn-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kqn.en | 32.6 | 0.480 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kqn-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kqn", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kqn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kqn-en * source languages: kqn * target languages: en * OPUS readme: kqn-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 32.6, chr-F: 0.480
[ "### opus-mt-kqn-en\n\n\n* source languages: kqn\n* target languages: en\n* OPUS readme: kqn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.480" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kqn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kqn-en\n\n\n* source languages: kqn\n* target languages: en\n* OPUS readme: kqn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.480" ]
[ 53, 111 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kqn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kqn-en\n\n\n* source languages: kqn\n* target languages: en\n* OPUS readme: kqn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.480" ]
translation
transformers
### opus-mt-kqn-es * source languages: kqn * target languages: es * OPUS readme: [kqn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kqn-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/kqn-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kqn.es | 20.9 | 0.378 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kqn-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kqn", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kqn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kqn-es * source languages: kqn * target languages: es * OPUS readme: kqn-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.9, chr-F: 0.378
[ "### opus-mt-kqn-es\n\n\n* source languages: kqn\n* target languages: es\n* OPUS readme: kqn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.9, chr-F: 0.378" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kqn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kqn-es\n\n\n* source languages: kqn\n* target languages: es\n* OPUS readme: kqn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.9, chr-F: 0.378" ]
[ 53, 112 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kqn #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kqn-es\n\n\n* source languages: kqn\n* target languages: es\n* OPUS readme: kqn-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.9, chr-F: 0.378" ]
translation
transformers
### opus-mt-kqn-fr * source languages: kqn * target languages: fr * OPUS readme: [kqn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kqn-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kqn-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kqn.fr | 23.2 | 0.400 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kqn-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kqn", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kqn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kqn-fr * source languages: kqn * target languages: fr * OPUS readme: kqn-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.2, chr-F: 0.400
[ "### opus-mt-kqn-fr\n\n\n* source languages: kqn\n* target languages: fr\n* OPUS readme: kqn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.400" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kqn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kqn-fr\n\n\n* source languages: kqn\n* target languages: fr\n* OPUS readme: kqn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.400" ]
[ 53, 111 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kqn #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kqn-fr\n\n\n* source languages: kqn\n* target languages: fr\n* OPUS readme: kqn-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.2, chr-F: 0.400" ]
translation
transformers
### opus-mt-kqn-sv * source languages: kqn * target languages: sv * OPUS readme: [kqn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kqn-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kqn-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kqn.sv | 23.3 | 0.409 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kqn-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kqn", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kqn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kqn-sv * source languages: kqn * target languages: sv * OPUS readme: kqn-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.3, chr-F: 0.409
[ "### opus-mt-kqn-sv\n\n\n* source languages: kqn\n* target languages: sv\n* OPUS readme: kqn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.409" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kqn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kqn-sv\n\n\n* source languages: kqn\n* target languages: sv\n* OPUS readme: kqn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.409" ]
[ 53, 112 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kqn #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kqn-sv\n\n\n* source languages: kqn\n* target languages: sv\n* OPUS readme: kqn-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.409" ]
translation
transformers
### opus-mt-kwn-en * source languages: kwn * target languages: en * OPUS readme: [kwn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kwn-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwn-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kwn.en | 27.5 | 0.434 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kwn-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kwn", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kwn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kwn-en * source languages: kwn * target languages: en * OPUS readme: kwn-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 27.5, chr-F: 0.434
[ "### opus-mt-kwn-en\n\n\n* source languages: kwn\n* target languages: en\n* OPUS readme: kwn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.434" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kwn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kwn-en\n\n\n* source languages: kwn\n* target languages: en\n* OPUS readme: kwn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.434" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kwn #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kwn-en\n\n\n* source languages: kwn\n* target languages: en\n* OPUS readme: kwn-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 27.5, chr-F: 0.434" ]
translation
transformers
### opus-mt-kwy-en * source languages: kwy * target languages: en * OPUS readme: [kwy-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kwy-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kwy-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwy-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwy-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kwy.en | 31.6 | 0.466 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kwy-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kwy", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kwy #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kwy-en * source languages: kwy * target languages: en * OPUS readme: kwy-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 31.6, chr-F: 0.466
[ "### opus-mt-kwy-en\n\n\n* source languages: kwy\n* target languages: en\n* OPUS readme: kwy-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.466" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kwy #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kwy-en\n\n\n* source languages: kwy\n* target languages: en\n* OPUS readme: kwy-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.466" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kwy #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kwy-en\n\n\n* source languages: kwy\n* target languages: en\n* OPUS readme: kwy-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.6, chr-F: 0.466" ]
translation
transformers
### opus-mt-kwy-fr * source languages: kwy * target languages: fr * OPUS readme: [kwy-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kwy-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kwy-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwy-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwy-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kwy.fr | 20.6 | 0.367 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kwy-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kwy", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kwy #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kwy-fr * source languages: kwy * target languages: fr * OPUS readme: kwy-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.6, chr-F: 0.367
[ "### opus-mt-kwy-fr\n\n\n* source languages: kwy\n* target languages: fr\n* OPUS readme: kwy-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.6, chr-F: 0.367" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kwy #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kwy-fr\n\n\n* source languages: kwy\n* target languages: fr\n* OPUS readme: kwy-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.6, chr-F: 0.367" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kwy #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kwy-fr\n\n\n* source languages: kwy\n* target languages: fr\n* OPUS readme: kwy-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.6, chr-F: 0.367" ]
translation
transformers
### opus-mt-kwy-sv * source languages: kwy * target languages: sv * OPUS readme: [kwy-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kwy-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kwy-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwy-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kwy-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.kwy.sv | 20.2 | 0.373 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-kwy-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "kwy", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #kwy #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-kwy-sv * source languages: kwy * target languages: sv * OPUS readme: kwy-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 20.2, chr-F: 0.373
[ "### opus-mt-kwy-sv\n\n\n* source languages: kwy\n* target languages: sv\n* OPUS readme: kwy-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.373" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kwy #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-kwy-sv\n\n\n* source languages: kwy\n* target languages: sv\n* OPUS readme: kwy-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.373" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #kwy #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-kwy-sv\n\n\n* source languages: kwy\n* target languages: sv\n* OPUS readme: kwy-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 20.2, chr-F: 0.373" ]
translation
transformers
### opus-mt-lg-en * source languages: lg * target languages: en * OPUS readme: [lg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lg.en | 32.6 | 0.480 | | Tatoeba.lg.en | 5.4 | 0.243 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lg-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lg", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lg-en * source languages: lg * target languages: en * OPUS readme: lg-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 32.6, chr-F: 0.480 testset: URL, BLEU: 5.4, chr-F: 0.243
[ "### opus-mt-lg-en\n\n\n* source languages: lg\n* target languages: en\n* OPUS readme: lg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.480\ntestset: URL, BLEU: 5.4, chr-F: 0.243" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lg-en\n\n\n* source languages: lg\n* target languages: en\n* OPUS readme: lg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.480\ntestset: URL, BLEU: 5.4, chr-F: 0.243" ]
[ 52, 130 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lg #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lg-en\n\n\n* source languages: lg\n* target languages: en\n* OPUS readme: lg-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 32.6, chr-F: 0.480\ntestset: URL, BLEU: 5.4, chr-F: 0.243" ]
translation
transformers
### opus-mt-lg-es * source languages: lg * target languages: es * OPUS readme: [lg-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lg.es | 22.1 | 0.393 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lg-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lg", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lg-es * source languages: lg * target languages: es * OPUS readme: lg-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.1, chr-F: 0.393
[ "### opus-mt-lg-es\n\n\n* source languages: lg\n* target languages: es\n* OPUS readme: lg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.393" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lg-es\n\n\n* source languages: lg\n* target languages: es\n* OPUS readme: lg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.393" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lg #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lg-es\n\n\n* source languages: lg\n* target languages: es\n* OPUS readme: lg-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.393" ]
translation
transformers
### opus-mt-lg-fi * source languages: lg * target languages: fi * OPUS readme: [lg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fi/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lg.fi | 21.8 | 0.424 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lg-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lg", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lg #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lg-fi * source languages: lg * target languages: fi * OPUS readme: lg-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 21.8, chr-F: 0.424
[ "### opus-mt-lg-fi\n\n\n* source languages: lg\n* target languages: fi\n* OPUS readme: lg-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.424" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lg #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lg-fi\n\n\n* source languages: lg\n* target languages: fi\n* OPUS readme: lg-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.424" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lg #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lg-fi\n\n\n* source languages: lg\n* target languages: fi\n* OPUS readme: lg-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.8, chr-F: 0.424" ]
translation
transformers
### opus-mt-lg-fr * source languages: lg * target languages: fr * OPUS readme: [lg-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lg.fr | 23.7 | 0.406 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lg-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lg", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lg-fr * source languages: lg * target languages: fr * OPUS readme: lg-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.7, chr-F: 0.406
[ "### opus-mt-lg-fr\n\n\n* source languages: lg\n* target languages: fr\n* OPUS readme: lg-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.406" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lg-fr\n\n\n* source languages: lg\n* target languages: fr\n* OPUS readme: lg-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.406" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lg #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lg-fr\n\n\n* source languages: lg\n* target languages: fr\n* OPUS readme: lg-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.406" ]
translation
transformers
### opus-mt-lg-sv * source languages: lg * target languages: sv * OPUS readme: [lg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lg-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lg-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lg.sv | 24.5 | 0.423 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lg-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lg", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lg-sv * source languages: lg * target languages: sv * OPUS readme: lg-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.5, chr-F: 0.423
[ "### opus-mt-lg-sv\n\n\n* source languages: lg\n* target languages: sv\n* OPUS readme: lg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.423" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lg-sv\n\n\n* source languages: lg\n* target languages: sv\n* OPUS readme: lg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.423" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lg #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lg-sv\n\n\n* source languages: lg\n* target languages: sv\n* OPUS readme: lg-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.5, chr-F: 0.423" ]
translation
transformers
### opus-mt-ln-de * source languages: ln * target languages: de * OPUS readme: [ln-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ln-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/ln-de/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-de/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-de/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ln.de | 23.3 | 0.428 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ln-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ln", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ln #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ln-de * source languages: ln * target languages: de * OPUS readme: ln-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.3, chr-F: 0.428
[ "### opus-mt-ln-de\n\n\n* source languages: ln\n* target languages: de\n* OPUS readme: ln-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.428" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ln #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ln-de\n\n\n* source languages: ln\n* target languages: de\n* OPUS readme: ln-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.428" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ln #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ln-de\n\n\n* source languages: ln\n* target languages: de\n* OPUS readme: ln-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.3, chr-F: 0.428" ]
translation
transformers
### opus-mt-ln-en * source languages: ln * target languages: en * OPUS readme: [ln-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ln-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ln-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ln.en | 35.9 | 0.516 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ln-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ln", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ln #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ln-en * source languages: ln * target languages: en * OPUS readme: ln-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 35.9, chr-F: 0.516
[ "### opus-mt-ln-en\n\n\n* source languages: ln\n* target languages: en\n* OPUS readme: ln-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.9, chr-F: 0.516" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ln #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ln-en\n\n\n* source languages: ln\n* target languages: en\n* OPUS readme: ln-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.9, chr-F: 0.516" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ln #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ln-en\n\n\n* source languages: ln\n* target languages: en\n* OPUS readme: ln-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.9, chr-F: 0.516" ]
translation
transformers
### opus-mt-ln-es * source languages: ln * target languages: es * OPUS readme: [ln-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ln-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ln-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ln.es | 26.5 | 0.444 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ln-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ln", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ln #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ln-es * source languages: ln * target languages: es * OPUS readme: ln-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.5, chr-F: 0.444
[ "### opus-mt-ln-es\n\n\n* source languages: ln\n* target languages: es\n* OPUS readme: ln-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.5, chr-F: 0.444" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ln #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ln-es\n\n\n* source languages: ln\n* target languages: es\n* OPUS readme: ln-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.5, chr-F: 0.444" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ln #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ln-es\n\n\n* source languages: ln\n* target languages: es\n* OPUS readme: ln-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.5, chr-F: 0.444" ]
translation
transformers
### opus-mt-ln-fr * source languages: ln * target languages: fr * OPUS readme: [ln-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ln-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/ln-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ln-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.ln.fr | 28.4 | 0.456 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-ln-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ln", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #ln #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-ln-fr * source languages: ln * target languages: fr * OPUS readme: ln-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 28.4, chr-F: 0.456
[ "### opus-mt-ln-fr\n\n\n* source languages: ln\n* target languages: fr\n* OPUS readme: ln-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.456" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ln #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-ln-fr\n\n\n* source languages: ln\n* target languages: fr\n* OPUS readme: ln-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.456" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #ln #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-ln-fr\n\n\n* source languages: ln\n* target languages: fr\n* OPUS readme: ln-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.456" ]
translation
transformers
### opus-mt-loz-de * source languages: loz * target languages: de * OPUS readme: [loz-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/loz-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/loz-de/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-de/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-de/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.loz.de | 24.3 | 0.438 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-loz-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "loz", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #loz #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-loz-de * source languages: loz * target languages: de * OPUS readme: loz-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.3, chr-F: 0.438
[ "### opus-mt-loz-de\n\n\n* source languages: loz\n* target languages: de\n* OPUS readme: loz-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.3, chr-F: 0.438" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-loz-de\n\n\n* source languages: loz\n* target languages: de\n* OPUS readme: loz-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.3, chr-F: 0.438" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-loz-de\n\n\n* source languages: loz\n* target languages: de\n* OPUS readme: loz-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.3, chr-F: 0.438" ]
translation
transformers
### opus-mt-loz-en * source languages: loz * target languages: en * OPUS readme: [loz-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/loz-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/loz-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.loz.en | 42.1 | 0.565 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-loz-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "loz", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #loz #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-loz-en * source languages: loz * target languages: en * OPUS readme: loz-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 42.1, chr-F: 0.565
[ "### opus-mt-loz-en\n\n\n* source languages: loz\n* target languages: en\n* OPUS readme: loz-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.565" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-loz-en\n\n\n* source languages: loz\n* target languages: en\n* OPUS readme: loz-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.565" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-loz-en\n\n\n* source languages: loz\n* target languages: en\n* OPUS readme: loz-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.1, chr-F: 0.565" ]
translation
transformers
### opus-mt-loz-es * source languages: loz * target languages: es * OPUS readme: [loz-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/loz-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/loz-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.loz.es | 28.4 | 0.464 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-loz-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "loz", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #loz #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-loz-es * source languages: loz * target languages: es * OPUS readme: loz-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 28.4, chr-F: 0.464
[ "### opus-mt-loz-es\n\n\n* source languages: loz\n* target languages: es\n* OPUS readme: loz-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.464" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-loz-es\n\n\n* source languages: loz\n* target languages: es\n* OPUS readme: loz-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.464" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-loz-es\n\n\n* source languages: loz\n* target languages: es\n* OPUS readme: loz-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.4, chr-F: 0.464" ]
translation
transformers
### opus-mt-loz-fi * source languages: loz * target languages: fi * OPUS readme: [loz-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/loz-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/loz-fi/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-fi/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-fi/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.loz.fi | 25.1 | 0.467 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-loz-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "loz", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #loz #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-loz-fi * source languages: loz * target languages: fi * OPUS readme: loz-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.1, chr-F: 0.467
[ "### opus-mt-loz-fi\n\n\n* source languages: loz\n* target languages: fi\n* OPUS readme: loz-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.1, chr-F: 0.467" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-loz-fi\n\n\n* source languages: loz\n* target languages: fi\n* OPUS readme: loz-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.1, chr-F: 0.467" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-loz-fi\n\n\n* source languages: loz\n* target languages: fi\n* OPUS readme: loz-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.1, chr-F: 0.467" ]
translation
transformers
### opus-mt-loz-fr * source languages: loz * target languages: fr * OPUS readme: [loz-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/loz-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/loz-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.loz.fr | 28.5 | 0.462 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-loz-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "loz", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #loz #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-loz-fr * source languages: loz * target languages: fr * OPUS readme: loz-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 28.5, chr-F: 0.462
[ "### opus-mt-loz-fr\n\n\n* source languages: loz\n* target languages: fr\n* OPUS readme: loz-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.5, chr-F: 0.462" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-loz-fr\n\n\n* source languages: loz\n* target languages: fr\n* OPUS readme: loz-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.5, chr-F: 0.462" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-loz-fr\n\n\n* source languages: loz\n* target languages: fr\n* OPUS readme: loz-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 28.5, chr-F: 0.462" ]
translation
transformers
### opus-mt-loz-sv * source languages: loz * target languages: sv * OPUS readme: [loz-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/loz-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/loz-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/loz-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.loz.sv | 30.0 | 0.477 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-loz-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "loz", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #loz #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-loz-sv * source languages: loz * target languages: sv * OPUS readme: loz-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.0, chr-F: 0.477
[ "### opus-mt-loz-sv\n\n\n* source languages: loz\n* target languages: sv\n* OPUS readme: loz-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.477" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-loz-sv\n\n\n* source languages: loz\n* target languages: sv\n* OPUS readme: loz-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.477" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #loz #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-loz-sv\n\n\n* source languages: loz\n* target languages: sv\n* OPUS readme: loz-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.0, chr-F: 0.477" ]
translation
transformers
### opus-mt-lt-de * source languages: lt * target languages: de * OPUS readme: [lt-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lt-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/lt-de/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-de/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-de/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.lt.de | 45.2 | 0.640 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lt-de
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lt #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lt-de * source languages: lt * target languages: de * OPUS readme: lt-de * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 45.2, chr-F: 0.640
[ "### opus-mt-lt-de\n\n\n* source languages: lt\n* target languages: de\n* OPUS readme: lt-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.2, chr-F: 0.640" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lt-de\n\n\n* source languages: lt\n* target languages: de\n* OPUS readme: lt-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.2, chr-F: 0.640" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lt-de\n\n\n* source languages: lt\n* target languages: de\n* OPUS readme: lt-de\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 45.2, chr-F: 0.640" ]
translation
transformers
### lit-epo * source group: Lithuanian * target group: Esperanto * OPUS readme: [lit-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-epo/README.md) * model: transformer-align * source language(s): lit * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.lit.epo | 13.0 | 0.313 | ### System Info: - hf_name: lit-epo - source_languages: lit - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lt', 'eo'] - src_constituents: {'lit'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-epo/opus-2020-06-16.test.txt - src_alpha3: lit - tgt_alpha3: epo - short_pair: lt-eo - chrF2_score: 0.313 - bleu: 13.0 - brevity_penalty: 1.0 - ref_len: 70340.0 - src_name: Lithuanian - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: lt - tgt_alpha2: eo - prefer_old: False - long_pair: lit-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["lt", "eo"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lt-eo
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "eo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lt", "eo" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lt #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### lit-epo * source group: Lithuanian * target group: Esperanto * OPUS readme: lit-epo * model: transformer-align * source language(s): lit * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 13.0, chr-F: 0.313 ### System Info: * hf\_name: lit-epo * source\_languages: lit * target\_languages: epo * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['lt', 'eo'] * src\_constituents: {'lit'} * tgt\_constituents: {'epo'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm4k,spm4k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: lit * tgt\_alpha3: epo * short\_pair: lt-eo * chrF2\_score: 0.313 * bleu: 13.0 * brevity\_penalty: 1.0 * ref\_len: 70340.0 * src\_name: Lithuanian * tgt\_name: Esperanto * train\_date: 2020-06-16 * src\_alpha2: lt * tgt\_alpha2: eo * prefer\_old: False * long\_pair: lit-epo * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### lit-epo\n\n\n* source group: Lithuanian\n* target group: Esperanto\n* OPUS readme: lit-epo\n* model: transformer-align\n* source language(s): lit\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 13.0, chr-F: 0.313", "### System Info:\n\n\n* hf\\_name: lit-epo\n* source\\_languages: lit\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'eo']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: epo\n* short\\_pair: lt-eo\n* chrF2\\_score: 0.313\n* bleu: 13.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 70340.0\n* src\\_name: Lithuanian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: lt\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: lit-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### lit-epo\n\n\n* source group: Lithuanian\n* target group: Esperanto\n* OPUS readme: lit-epo\n* model: transformer-align\n* source language(s): lit\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 13.0, chr-F: 0.313", "### System Info:\n\n\n* hf\\_name: lit-epo\n* source\\_languages: lit\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'eo']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: epo\n* short\\_pair: lt-eo\n* chrF2\\_score: 0.313\n* bleu: 13.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 70340.0\n* src\\_name: Lithuanian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: lt\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: lit-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 52, 135, 400 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #eo #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### lit-epo\n\n\n* source group: Lithuanian\n* target group: Esperanto\n* OPUS readme: lit-epo\n* model: transformer-align\n* source language(s): lit\n* target language(s): epo\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm4k,spm4k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 13.0, chr-F: 0.313### System Info:\n\n\n* hf\\_name: lit-epo\n* source\\_languages: lit\n* target\\_languages: epo\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'eo']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'epo'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm4k,spm4k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: epo\n* short\\_pair: lt-eo\n* chrF2\\_score: 0.313\n* bleu: 13.0\n* brevity\\_penalty: 1.0\n* ref\\_len: 70340.0\n* src\\_name: Lithuanian\n* tgt\\_name: Esperanto\n* train\\_date: 2020-06-16\n* src\\_alpha2: lt\n* tgt\\_alpha2: eo\n* prefer\\_old: False\n* long\\_pair: lit-epo\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### lit-spa * source group: Lithuanian * target group: Spanish * OPUS readme: [lit-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-spa/README.md) * model: transformer-align * source language(s): lit * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-spa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-spa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-spa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.lit.spa | 50.5 | 0.680 | ### System Info: - hf_name: lit-spa - source_languages: lit - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lt', 'es'] - src_constituents: {'lit'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-spa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-spa/opus-2020-06-17.test.txt - src_alpha3: lit - tgt_alpha3: spa - short_pair: lt-es - chrF2_score: 0.68 - bleu: 50.5 - brevity_penalty: 0.963 - ref_len: 2738.0 - src_name: Lithuanian - tgt_name: Spanish - train_date: 2020-06-17 - src_alpha2: lt - tgt_alpha2: es - prefer_old: False - long_pair: lit-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["lt", "es"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lt-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lt", "es" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lt #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### lit-spa * source group: Lithuanian * target group: Spanish * OPUS readme: lit-spa * model: transformer-align * source language(s): lit * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 50.5, chr-F: 0.680 ### System Info: * hf\_name: lit-spa * source\_languages: lit * target\_languages: spa * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['lt', 'es'] * src\_constituents: {'lit'} * tgt\_constituents: {'spa'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: lit * tgt\_alpha3: spa * short\_pair: lt-es * chrF2\_score: 0.68 * bleu: 50.5 * brevity\_penalty: 0.963 * ref\_len: 2738.0 * src\_name: Lithuanian * tgt\_name: Spanish * train\_date: 2020-06-17 * src\_alpha2: lt * tgt\_alpha2: es * prefer\_old: False * long\_pair: lit-spa * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### lit-spa\n\n\n* source group: Lithuanian\n* target group: Spanish\n* OPUS readme: lit-spa\n* model: transformer-align\n* source language(s): lit\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.5, chr-F: 0.680", "### System Info:\n\n\n* hf\\_name: lit-spa\n* source\\_languages: lit\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'es']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: spa\n* short\\_pair: lt-es\n* chrF2\\_score: 0.68\n* bleu: 50.5\n* brevity\\_penalty: 0.963\n* ref\\_len: 2738.0\n* src\\_name: Lithuanian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: lit-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### lit-spa\n\n\n* source group: Lithuanian\n* target group: Spanish\n* OPUS readme: lit-spa\n* model: transformer-align\n* source language(s): lit\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.5, chr-F: 0.680", "### System Info:\n\n\n* hf\\_name: lit-spa\n* source\\_languages: lit\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'es']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: spa\n* short\\_pair: lt-es\n* chrF2\\_score: 0.68\n* bleu: 50.5\n* brevity\\_penalty: 0.963\n* ref\\_len: 2738.0\n* src\\_name: Lithuanian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: lit-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 130, 390 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### lit-spa\n\n\n* source group: Lithuanian\n* target group: Spanish\n* OPUS readme: lit-spa\n* model: transformer-align\n* source language(s): lit\n* target language(s): spa\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 50.5, chr-F: 0.680### System Info:\n\n\n* hf\\_name: lit-spa\n* source\\_languages: lit\n* target\\_languages: spa\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'es']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'spa'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: spa\n* short\\_pair: lt-es\n* chrF2\\_score: 0.68\n* bleu: 50.5\n* brevity\\_penalty: 0.963\n* ref\\_len: 2738.0\n* src\\_name: Lithuanian\n* tgt\\_name: Spanish\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: es\n* prefer\\_old: False\n* long\\_pair: lit-spa\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-lt-fr * source languages: lt * target languages: fr * OPUS readme: [lt-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lt-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lt-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lt.fr | 22.0 | 0.428 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lt-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lt #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lt-fr * source languages: lt * target languages: fr * OPUS readme: lt-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.0, chr-F: 0.428
[ "### opus-mt-lt-fr\n\n\n* source languages: lt\n* target languages: fr\n* OPUS readme: lt-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.0, chr-F: 0.428" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lt-fr\n\n\n* source languages: lt\n* target languages: fr\n* OPUS readme: lt-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.0, chr-F: 0.428" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lt-fr\n\n\n* source languages: lt\n* target languages: fr\n* OPUS readme: lt-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.0, chr-F: 0.428" ]
translation
transformers
### lit-ita * source group: Lithuanian * target group: Italian * OPUS readme: [lit-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-ita/README.md) * model: transformer-align * source language(s): lit * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-ita/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-ita/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-ita/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.lit.ita | 42.2 | 0.657 | ### System Info: - hf_name: lit-ita - source_languages: lit - target_languages: ita - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-ita/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lt', 'it'] - src_constituents: {'lit'} - tgt_constituents: {'ita'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-ita/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-ita/opus-2020-06-17.test.txt - src_alpha3: lit - tgt_alpha3: ita - short_pair: lt-it - chrF2_score: 0.657 - bleu: 42.2 - brevity_penalty: 0.9740000000000001 - ref_len: 1505.0 - src_name: Lithuanian - tgt_name: Italian - train_date: 2020-06-17 - src_alpha2: lt - tgt_alpha2: it - prefer_old: False - long_pair: lit-ita - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["lt", "it"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lt-it
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lt", "it" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lt #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### lit-ita * source group: Lithuanian * target group: Italian * OPUS readme: lit-ita * model: transformer-align * source language(s): lit * target language(s): ita * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 42.2, chr-F: 0.657 ### System Info: * hf\_name: lit-ita * source\_languages: lit * target\_languages: ita * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['lt', 'it'] * src\_constituents: {'lit'} * tgt\_constituents: {'ita'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: lit * tgt\_alpha3: ita * short\_pair: lt-it * chrF2\_score: 0.657 * bleu: 42.2 * brevity\_penalty: 0.9740000000000001 * ref\_len: 1505.0 * src\_name: Lithuanian * tgt\_name: Italian * train\_date: 2020-06-17 * src\_alpha2: lt * tgt\_alpha2: it * prefer\_old: False * long\_pair: lit-ita * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### lit-ita\n\n\n* source group: Lithuanian\n* target group: Italian\n* OPUS readme: lit-ita\n* model: transformer-align\n* source language(s): lit\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.2, chr-F: 0.657", "### System Info:\n\n\n* hf\\_name: lit-ita\n* source\\_languages: lit\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'it']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: ita\n* short\\_pair: lt-it\n* chrF2\\_score: 0.657\n* bleu: 42.2\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 1505.0\n* src\\_name: Lithuanian\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: lit-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### lit-ita\n\n\n* source group: Lithuanian\n* target group: Italian\n* OPUS readme: lit-ita\n* model: transformer-align\n* source language(s): lit\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.2, chr-F: 0.657", "### System Info:\n\n\n* hf\\_name: lit-ita\n* source\\_languages: lit\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'it']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: ita\n* short\\_pair: lt-it\n* chrF2\\_score: 0.657\n* bleu: 42.2\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 1505.0\n* src\\_name: Lithuanian\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: lit-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 402 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #it #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### lit-ita\n\n\n* source group: Lithuanian\n* target group: Italian\n* OPUS readme: lit-ita\n* model: transformer-align\n* source language(s): lit\n* target language(s): ita\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 42.2, chr-F: 0.657### System Info:\n\n\n* hf\\_name: lit-ita\n* source\\_languages: lit\n* target\\_languages: ita\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'it']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'ita'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: ita\n* short\\_pair: lt-it\n* chrF2\\_score: 0.657\n* bleu: 42.2\n* brevity\\_penalty: 0.9740000000000001\n* ref\\_len: 1505.0\n* src\\_name: Lithuanian\n* tgt\\_name: Italian\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: it\n* prefer\\_old: False\n* long\\_pair: lit-ita\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### lit-pol * source group: Lithuanian * target group: Polish * OPUS readme: [lit-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-pol/README.md) * model: transformer-align * source language(s): lit * target language(s): pol * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-pol/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-pol/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-pol/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.lit.pol | 53.6 | 0.721 | ### System Info: - hf_name: lit-pol - source_languages: lit - target_languages: pol - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-pol/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lt', 'pl'] - src_constituents: {'lit'} - tgt_constituents: {'pol'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-pol/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-pol/opus-2020-06-17.test.txt - src_alpha3: lit - tgt_alpha3: pol - short_pair: lt-pl - chrF2_score: 0.721 - bleu: 53.6 - brevity_penalty: 0.9570000000000001 - ref_len: 10629.0 - src_name: Lithuanian - tgt_name: Polish - train_date: 2020-06-17 - src_alpha2: lt - tgt_alpha2: pl - prefer_old: False - long_pair: lit-pol - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["lt", "pl"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lt-pl
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "pl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lt", "pl" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lt #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### lit-pol * source group: Lithuanian * target group: Polish * OPUS readme: lit-pol * model: transformer-align * source language(s): lit * target language(s): pol * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 53.6, chr-F: 0.721 ### System Info: * hf\_name: lit-pol * source\_languages: lit * target\_languages: pol * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['lt', 'pl'] * src\_constituents: {'lit'} * tgt\_constituents: {'pol'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: lit * tgt\_alpha3: pol * short\_pair: lt-pl * chrF2\_score: 0.721 * bleu: 53.6 * brevity\_penalty: 0.9570000000000001 * ref\_len: 10629.0 * src\_name: Lithuanian * tgt\_name: Polish * train\_date: 2020-06-17 * src\_alpha2: lt * tgt\_alpha2: pl * prefer\_old: False * long\_pair: lit-pol * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### lit-pol\n\n\n* source group: Lithuanian\n* target group: Polish\n* OPUS readme: lit-pol\n* model: transformer-align\n* source language(s): lit\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.6, chr-F: 0.721", "### System Info:\n\n\n* hf\\_name: lit-pol\n* source\\_languages: lit\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'pl']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: pol\n* short\\_pair: lt-pl\n* chrF2\\_score: 0.721\n* bleu: 53.6\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 10629.0\n* src\\_name: Lithuanian\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: lit-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### lit-pol\n\n\n* source group: Lithuanian\n* target group: Polish\n* OPUS readme: lit-pol\n* model: transformer-align\n* source language(s): lit\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.6, chr-F: 0.721", "### System Info:\n\n\n* hf\\_name: lit-pol\n* source\\_languages: lit\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'pl']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: pol\n* short\\_pair: lt-pl\n* chrF2\\_score: 0.721\n* bleu: 53.6\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 10629.0\n* src\\_name: Lithuanian\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: lit-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 131, 397 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #pl #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### lit-pol\n\n\n* source group: Lithuanian\n* target group: Polish\n* OPUS readme: lit-pol\n* model: transformer-align\n* source language(s): lit\n* target language(s): pol\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 53.6, chr-F: 0.721### System Info:\n\n\n* hf\\_name: lit-pol\n* source\\_languages: lit\n* target\\_languages: pol\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'pl']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'pol'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: pol\n* short\\_pair: lt-pl\n* chrF2\\_score: 0.721\n* bleu: 53.6\n* brevity\\_penalty: 0.9570000000000001\n* ref\\_len: 10629.0\n* src\\_name: Lithuanian\n* tgt\\_name: Polish\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: pl\n* prefer\\_old: False\n* long\\_pair: lit-pol\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### lit-rus * source group: Lithuanian * target group: Russian * OPUS readme: [lit-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-rus/README.md) * model: transformer-align * source language(s): lit * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.lit.rus | 51.7 | 0.695 | ### System Info: - hf_name: lit-rus - source_languages: lit - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lt', 'ru'] - src_constituents: {'lit'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-rus/opus-2020-06-17.test.txt - src_alpha3: lit - tgt_alpha3: rus - short_pair: lt-ru - chrF2_score: 0.695 - bleu: 51.7 - brevity_penalty: 0.982 - ref_len: 15395.0 - src_name: Lithuanian - tgt_name: Russian - train_date: 2020-06-17 - src_alpha2: lt - tgt_alpha2: ru - prefer_old: False - long_pair: lit-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["lt", "ru"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lt-ru
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "ru", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lt", "ru" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lt #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### lit-rus * source group: Lithuanian * target group: Russian * OPUS readme: lit-rus * model: transformer-align * source language(s): lit * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 51.7, chr-F: 0.695 ### System Info: * hf\_name: lit-rus * source\_languages: lit * target\_languages: rus * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['lt', 'ru'] * src\_constituents: {'lit'} * tgt\_constituents: {'rus'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: lit * tgt\_alpha3: rus * short\_pair: lt-ru * chrF2\_score: 0.695 * bleu: 51.7 * brevity\_penalty: 0.982 * ref\_len: 15395.0 * src\_name: Lithuanian * tgt\_name: Russian * train\_date: 2020-06-17 * src\_alpha2: lt * tgt\_alpha2: ru * prefer\_old: False * long\_pair: lit-rus * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### lit-rus\n\n\n* source group: Lithuanian\n* target group: Russian\n* OPUS readme: lit-rus\n* model: transformer-align\n* source language(s): lit\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.7, chr-F: 0.695", "### System Info:\n\n\n* hf\\_name: lit-rus\n* source\\_languages: lit\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'ru']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: rus\n* short\\_pair: lt-ru\n* chrF2\\_score: 0.695\n* bleu: 51.7\n* brevity\\_penalty: 0.982\n* ref\\_len: 15395.0\n* src\\_name: Lithuanian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: lit-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### lit-rus\n\n\n* source group: Lithuanian\n* target group: Russian\n* OPUS readme: lit-rus\n* model: transformer-align\n* source language(s): lit\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.7, chr-F: 0.695", "### System Info:\n\n\n* hf\\_name: lit-rus\n* source\\_languages: lit\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'ru']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: rus\n* short\\_pair: lt-ru\n* chrF2\\_score: 0.695\n* bleu: 51.7\n* brevity\\_penalty: 0.982\n* ref\\_len: 15395.0\n* src\\_name: Lithuanian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: lit-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 131, 392 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #ru #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### lit-rus\n\n\n* source group: Lithuanian\n* target group: Russian\n* OPUS readme: lit-rus\n* model: transformer-align\n* source language(s): lit\n* target language(s): rus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 51.7, chr-F: 0.695### System Info:\n\n\n* hf\\_name: lit-rus\n* source\\_languages: lit\n* target\\_languages: rus\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'ru']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'rus'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: rus\n* short\\_pair: lt-ru\n* chrF2\\_score: 0.695\n* bleu: 51.7\n* brevity\\_penalty: 0.982\n* ref\\_len: 15395.0\n* src\\_name: Lithuanian\n* tgt\\_name: Russian\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: ru\n* prefer\\_old: False\n* long\\_pair: lit-rus\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-lt-sv * source languages: lt * target languages: sv * OPUS readme: [lt-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lt-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lt-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lt.sv | 22.9 | 0.447 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lt-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lt #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lt-sv * source languages: lt * target languages: sv * OPUS readme: lt-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.9, chr-F: 0.447
[ "### opus-mt-lt-sv\n\n\n* source languages: lt\n* target languages: sv\n* OPUS readme: lt-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.9, chr-F: 0.447" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lt-sv\n\n\n* source languages: lt\n* target languages: sv\n* OPUS readme: lt-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.9, chr-F: 0.447" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lt-sv\n\n\n* source languages: lt\n* target languages: sv\n* OPUS readme: lt-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.9, chr-F: 0.447" ]
translation
transformers
### lit-tur * source group: Lithuanian * target group: Turkish * OPUS readme: [lit-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-tur/README.md) * model: transformer-align * source language(s): lit * target language(s): tur * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-tur/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-tur/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/lit-tur/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.lit.tur | 35.8 | 0.648 | ### System Info: - hf_name: lit-tur - source_languages: lit - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/lit-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['lt', 'tr'] - src_constituents: {'lit'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-tur/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/lit-tur/opus-2020-06-17.test.txt - src_alpha3: lit - tgt_alpha3: tur - short_pair: lt-tr - chrF2_score: 0.648 - bleu: 35.8 - brevity_penalty: 0.927 - ref_len: 7700.0 - src_name: Lithuanian - tgt_name: Turkish - train_date: 2020-06-17 - src_alpha2: lt - tgt_alpha2: tr - prefer_old: False - long_pair: lit-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
{"language": ["lt", "tr"], "license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lt-tr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lt", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "lt", "tr" ]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lt #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### lit-tur * source group: Lithuanian * target group: Turkish * OPUS readme: lit-tur * model: transformer-align * source language(s): lit * target language(s): tur * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 35.8, chr-F: 0.648 ### System Info: * hf\_name: lit-tur * source\_languages: lit * target\_languages: tur * opus\_readme\_url: URL * original\_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['lt', 'tr'] * src\_constituents: {'lit'} * tgt\_constituents: {'tur'} * src\_multilingual: False * tgt\_multilingual: False * prepro: normalization + SentencePiece (spm32k,spm32k) * url\_model: URL * url\_test\_set: URL * src\_alpha3: lit * tgt\_alpha3: tur * short\_pair: lt-tr * chrF2\_score: 0.648 * bleu: 35.8 * brevity\_penalty: 0.927 * ref\_len: 7700.0 * src\_name: Lithuanian * tgt\_name: Turkish * train\_date: 2020-06-17 * src\_alpha2: lt * tgt\_alpha2: tr * prefer\_old: False * long\_pair: lit-tur * helsinki\_git\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers\_git\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port\_machine: brutasse * port\_time: 2020-08-21-14:41
[ "### lit-tur\n\n\n* source group: Lithuanian\n* target group: Turkish\n* OPUS readme: lit-tur\n* model: transformer-align\n* source language(s): lit\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.8, chr-F: 0.648", "### System Info:\n\n\n* hf\\_name: lit-tur\n* source\\_languages: lit\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'tr']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: tur\n* short\\_pair: lt-tr\n* chrF2\\_score: 0.648\n* bleu: 35.8\n* brevity\\_penalty: 0.927\n* ref\\_len: 7700.0\n* src\\_name: Lithuanian\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: lit-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### lit-tur\n\n\n* source group: Lithuanian\n* target group: Turkish\n* OPUS readme: lit-tur\n* model: transformer-align\n* source language(s): lit\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.8, chr-F: 0.648", "### System Info:\n\n\n* hf\\_name: lit-tur\n* source\\_languages: lit\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'tr']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: tur\n* short\\_pair: lt-tr\n* chrF2\\_score: 0.648\n* bleu: 35.8\n* brevity\\_penalty: 0.927\n* ref\\_len: 7700.0\n* src\\_name: Lithuanian\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: lit-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
[ 51, 134, 396 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lt #tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### lit-tur\n\n\n* source group: Lithuanian\n* target group: Turkish\n* OPUS readme: lit-tur\n* model: transformer-align\n* source language(s): lit\n* target language(s): tur\n* model: transformer-align\n* pre-processing: normalization + SentencePiece (spm32k,spm32k)\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.8, chr-F: 0.648### System Info:\n\n\n* hf\\_name: lit-tur\n* source\\_languages: lit\n* target\\_languages: tur\n* opus\\_readme\\_url: URL\n* original\\_repo: Tatoeba-Challenge\n* tags: ['translation']\n* languages: ['lt', 'tr']\n* src\\_constituents: {'lit'}\n* tgt\\_constituents: {'tur'}\n* src\\_multilingual: False\n* tgt\\_multilingual: False\n* prepro: normalization + SentencePiece (spm32k,spm32k)\n* url\\_model: URL\n* url\\_test\\_set: URL\n* src\\_alpha3: lit\n* tgt\\_alpha3: tur\n* short\\_pair: lt-tr\n* chrF2\\_score: 0.648\n* bleu: 35.8\n* brevity\\_penalty: 0.927\n* ref\\_len: 7700.0\n* src\\_name: Lithuanian\n* tgt\\_name: Turkish\n* train\\_date: 2020-06-17\n* src\\_alpha2: lt\n* tgt\\_alpha2: tr\n* prefer\\_old: False\n* long\\_pair: lit-tur\n* helsinki\\_git\\_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535\n* transformers\\_git\\_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b\n* port\\_machine: brutasse\n* port\\_time: 2020-08-21-14:41" ]
translation
transformers
### opus-mt-lu-en * source languages: lu * target languages: en * OPUS readme: [lu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lu.en | 35.7 | 0.517 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lu-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lu", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lu #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lu-en * source languages: lu * target languages: en * OPUS readme: lu-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 35.7, chr-F: 0.517
[ "### opus-mt-lu-en\n\n\n* source languages: lu\n* target languages: en\n* OPUS readme: lu-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.7, chr-F: 0.517" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lu #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lu-en\n\n\n* source languages: lu\n* target languages: en\n* OPUS readme: lu-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.7, chr-F: 0.517" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lu #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lu-en\n\n\n* source languages: lu\n* target languages: en\n* OPUS readme: lu-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 35.7, chr-F: 0.517" ]
translation
transformers
### opus-mt-lu-es * source languages: lu * target languages: es * OPUS readme: [lu-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lu.es | 22.4 | 0.400 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lu-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lu", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lu #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lu-es * source languages: lu * target languages: es * OPUS readme: lu-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.4, chr-F: 0.400
[ "### opus-mt-lu-es\n\n\n* source languages: lu\n* target languages: es\n* OPUS readme: lu-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.400" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lu #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lu-es\n\n\n* source languages: lu\n* target languages: es\n* OPUS readme: lu-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.400" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lu #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lu-es\n\n\n* source languages: lu\n* target languages: es\n* OPUS readme: lu-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.4, chr-F: 0.400" ]
translation
transformers
### opus-mt-lu-fi * source languages: lu * target languages: fi * OPUS readme: [lu-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-fi/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-fi/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-fi/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lu.fi | 21.4 | 0.442 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lu-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lu", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lu #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lu-fi * source languages: lu * target languages: fi * OPUS readme: lu-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 21.4, chr-F: 0.442
[ "### opus-mt-lu-fi\n\n\n* source languages: lu\n* target languages: fi\n* OPUS readme: lu-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.4, chr-F: 0.442" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lu #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lu-fi\n\n\n* source languages: lu\n* target languages: fi\n* OPUS readme: lu-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.4, chr-F: 0.442" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lu #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lu-fi\n\n\n* source languages: lu\n* target languages: fi\n* OPUS readme: lu-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.4, chr-F: 0.442" ]
translation
transformers
### opus-mt-lu-fr * source languages: lu * target languages: fr * OPUS readme: [lu-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-fr/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-fr/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-fr/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lu.fr | 26.4 | 0.431 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lu-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lu", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lu #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lu-fr * source languages: lu * target languages: fr * OPUS readme: lu-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 26.4, chr-F: 0.431
[ "### opus-mt-lu-fr\n\n\n* source languages: lu\n* target languages: fr\n* OPUS readme: lu-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.431" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lu #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lu-fr\n\n\n* source languages: lu\n* target languages: fr\n* OPUS readme: lu-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.431" ]
[ 51, 106 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lu #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lu-fr\n\n\n* source languages: lu\n* target languages: fr\n* OPUS readme: lu-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 26.4, chr-F: 0.431" ]
translation
transformers
### opus-mt-lu-sv * source languages: lu * target languages: sv * OPUS readme: [lu-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lu.sv | 25.4 | 0.435 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lu-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lu", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lu #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lu-sv * source languages: lu * target languages: sv * OPUS readme: lu-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.4, chr-F: 0.435
[ "### opus-mt-lu-sv\n\n\n* source languages: lu\n* target languages: sv\n* OPUS readme: lu-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.4, chr-F: 0.435" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lu #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lu-sv\n\n\n* source languages: lu\n* target languages: sv\n* OPUS readme: lu-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.4, chr-F: 0.435" ]
[ 51, 105 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lu #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lu-sv\n\n\n* source languages: lu\n* target languages: sv\n* OPUS readme: lu-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.4, chr-F: 0.435" ]
translation
transformers
### opus-mt-lua-en * source languages: lua * target languages: en * OPUS readme: [lua-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lua.en | 34.4 | 0.502 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lua-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lua", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lua #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lua-en * source languages: lua * target languages: en * OPUS readme: lua-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 34.4, chr-F: 0.502
[ "### opus-mt-lua-en\n\n\n* source languages: lua\n* target languages: en\n* OPUS readme: lua-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.4, chr-F: 0.502" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lua #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lua-en\n\n\n* source languages: lua\n* target languages: en\n* OPUS readme: lua-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.4, chr-F: 0.502" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lua #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lua-en\n\n\n* source languages: lua\n* target languages: en\n* OPUS readme: lua-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 34.4, chr-F: 0.502" ]
translation
transformers
### opus-mt-lua-es * source languages: lua * target languages: es * OPUS readme: [lua-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lua.es | 23.1 | 0.409 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lua-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lua", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lua #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lua-es * source languages: lua * target languages: es * OPUS readme: lua-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.1, chr-F: 0.409
[ "### opus-mt-lua-es\n\n\n* source languages: lua\n* target languages: es\n* OPUS readme: lua-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.409" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lua #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lua-es\n\n\n* source languages: lua\n* target languages: es\n* OPUS readme: lua-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.409" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lua #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lua-es\n\n\n* source languages: lua\n* target languages: es\n* OPUS readme: lua-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.1, chr-F: 0.409" ]
translation
transformers
### opus-mt-lua-fi * source languages: lua * target languages: fi * OPUS readme: [lua-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-fi/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fi/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fi/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lua.fi | 23.5 | 0.450 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lua-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lua", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lua #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lua-fi * source languages: lua * target languages: fi * OPUS readme: lua-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.5, chr-F: 0.450
[ "### opus-mt-lua-fi\n\n\n* source languages: lua\n* target languages: fi\n* OPUS readme: lua-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.450" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lua #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lua-fi\n\n\n* source languages: lua\n* target languages: fi\n* OPUS readme: lua-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.450" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lua #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lua-fi\n\n\n* source languages: lua\n* target languages: fi\n* OPUS readme: lua-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.5, chr-F: 0.450" ]
translation
transformers
### opus-mt-lua-fr * source languages: lua * target languages: fr * OPUS readme: [lua-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lua.fr | 25.7 | 0.429 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lua-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lua", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lua #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lua-fr * source languages: lua * target languages: fr * OPUS readme: lua-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.7, chr-F: 0.429
[ "### opus-mt-lua-fr\n\n\n* source languages: lua\n* target languages: fr\n* OPUS readme: lua-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.429" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lua #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lua-fr\n\n\n* source languages: lua\n* target languages: fr\n* OPUS readme: lua-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.429" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lua #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lua-fr\n\n\n* source languages: lua\n* target languages: fr\n* OPUS readme: lua-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.429" ]
translation
transformers
### opus-mt-lua-sv * source languages: lua * target languages: sv * OPUS readme: [lua-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lua.sv | 25.7 | 0.437 |
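For finer control than the pipeline wrapper, the same kind of checkpoint can be driven through the Marian classes directly. A hedged sketch (the example sentences are placeholders; beam size and maximum length are illustrative choices, not values from the card):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lua-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Placeholder source sentences; padding lets them share one batch.
src_sentences = ["<source sentence 1>", "<source sentence 2>"]
batch = tokenizer(src_sentences, return_tensors="pt", padding=True)

generated = model.generate(**batch, num_beams=4, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```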
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lua-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lua", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lua #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lua-sv * source languages: lua * target languages: sv * OPUS readme: lua-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.7, chr-F: 0.437
[ "### opus-mt-lua-sv\n\n\n* source languages: lua\n* target languages: sv\n* OPUS readme: lua-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.437" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lua #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lua-sv\n\n\n* source languages: lua\n* target languages: sv\n* OPUS readme: lua-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.437" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lua #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lua-sv\n\n\n* source languages: lua\n* target languages: sv\n* OPUS readme: lua-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.7, chr-F: 0.437" ]
translation
transformers
### opus-mt-lue-en * source languages: lue * target languages: en * OPUS readme: [lue-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lue.en | 31.7 | 0.469 |
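The card also links the original (unconverted) Marian-NMT weights as a zip archive. A sketch for fetching and unpacking it with standard-library tooling plus `requests`; the archive layout is not described in the card itself, so the final listing is only for inspection:

```python
import zipfile
import requests

# URL copied from the "download original weights" entry in the card above.
url = "https://object.pouta.csc.fi/OPUS-MT-models/lue-en/opus-2020-01-09.zip"
archive = "opus-2020-01-09.zip"

response = requests.get(url, timeout=60)
response.raise_for_status()
with open(archive, "wb") as fh:
    fh.write(response.content)

with zipfile.ZipFile(archive) as zf:
    zf.extractall("opus-mt-lue-en-original")
    # Such archives typically ship Marian weights, vocabularies and SentencePiece
    # models, but inspect the listing rather than relying on exact file names.
    print(zf.namelist())
```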
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lue-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lue", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lue #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lue-en * source languages: lue * target languages: en * OPUS readme: lue-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 31.7, chr-F: 0.469
[ "### opus-mt-lue-en\n\n\n* source languages: lue\n* target languages: en\n* OPUS readme: lue-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.7, chr-F: 0.469" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lue #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lue-en\n\n\n* source languages: lue\n* target languages: en\n* OPUS readme: lue-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.7, chr-F: 0.469" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lue #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lue-en\n\n\n* source languages: lue\n* target languages: en\n* OPUS readme: lue-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 31.7, chr-F: 0.469" ]
translation
transformers
### opus-mt-lue-es * source languages: lue * target languages: es * OPUS readme: [lue-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lue.es | 22.8 | 0.399 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lue-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lue", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lue #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lue-es * source languages: lue * target languages: es * OPUS readme: lue-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.8, chr-F: 0.399
[ "### opus-mt-lue-es\n\n\n* source languages: lue\n* target languages: es\n* OPUS readme: lue-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.399" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lue #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lue-es\n\n\n* source languages: lue\n* target languages: es\n* OPUS readme: lue-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.399" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lue #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lue-es\n\n\n* source languages: lue\n* target languages: es\n* OPUS readme: lue-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.8, chr-F: 0.399" ]
translation
transformers
### opus-mt-lue-fi * source languages: lue * target languages: fi * OPUS readme: [lue-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-fi/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fi/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fi/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lue.fi | 22.1 | 0.427 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lue-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lue", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lue #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lue-fi * source languages: lue * target languages: fi * OPUS readme: lue-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.1, chr-F: 0.427
[ "### opus-mt-lue-fi\n\n\n* source languages: lue\n* target languages: fi\n* OPUS readme: lue-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.427" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lue #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lue-fi\n\n\n* source languages: lue\n* target languages: fi\n* OPUS readme: lue-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.427" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lue #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lue-fi\n\n\n* source languages: lue\n* target languages: fi\n* OPUS readme: lue-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.1, chr-F: 0.427" ]
translation
transformers
### opus-mt-lue-fr * source languages: lue * target languages: fr * OPUS readme: [lue-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lue.fr | 24.1 | 0.407 |
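For larger inputs it is common to translate in mini-batches under `torch.no_grad()`. A sketch assuming the PyTorch weights advertised by the `pytorch` tag, with placeholder sentences; batch size and beam count are arbitrary choices, not values from the card:

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lue-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name).eval()

def translate(sentences, batch_size=16):
    """Translate a list of source sentences to French in mini-batches."""
    outputs = []
    for start in range(0, len(sentences), batch_size):
        chunk = sentences[start:start + batch_size]
        batch = tokenizer(chunk, return_tensors="pt", padding=True, truncation=True)
        with torch.no_grad():
            generated = model.generate(**batch, num_beams=4)
        outputs.extend(tokenizer.batch_decode(generated, skip_special_tokens=True))
    return outputs

print(translate(["<source sentence placeholder>"]))
```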
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lue-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lue", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lue #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lue-fr * source languages: lue * target languages: fr * OPUS readme: lue-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 24.1, chr-F: 0.407
[ "### opus-mt-lue-fr\n\n\n* source languages: lue\n* target languages: fr\n* OPUS readme: lue-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.407" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lue #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lue-fr\n\n\n* source languages: lue\n* target languages: fr\n* OPUS readme: lue-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.407" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lue #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lue-fr\n\n\n* source languages: lue\n* target languages: fr\n* OPUS readme: lue-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 24.1, chr-F: 0.407" ]
translation
transformers
### opus-mt-lue-sv * source languages: lue * target languages: sv * OPUS readme: [lue-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lue.sv | 23.7 | 0.412 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lue-sv
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lue", "sv", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lue #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lue-sv * source languages: lue * target languages: sv * OPUS readme: lue-sv * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 23.7, chr-F: 0.412
[ "### opus-mt-lue-sv\n\n\n* source languages: lue\n* target languages: sv\n* OPUS readme: lue-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.412" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lue #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lue-sv\n\n\n* source languages: lue\n* target languages: sv\n* OPUS readme: lue-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.412" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lue #sv #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lue-sv\n\n\n* source languages: lue\n* target languages: sv\n* OPUS readme: lue-sv\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 23.7, chr-F: 0.412" ]
translation
transformers
### opus-mt-lun-en * source languages: lun * target languages: en * OPUS readme: [lun-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lun-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lun-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lun-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lun-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lun.en | 30.6 | 0.466 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lun-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lun", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lun #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lun-en * source languages: lun * target languages: en * OPUS readme: lun-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 30.6, chr-F: 0.466
[ "### opus-mt-lun-en\n\n\n* source languages: lun\n* target languages: en\n* OPUS readme: lun-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.6, chr-F: 0.466" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lun #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lun-en\n\n\n* source languages: lun\n* target languages: en\n* OPUS readme: lun-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.6, chr-F: 0.466" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lun #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lun-en\n\n\n* source languages: lun\n* target languages: en\n* OPUS readme: lun-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 30.6, chr-F: 0.466" ]
translation
transformers
### opus-mt-luo-en * source languages: luo * target languages: en * OPUS readme: [luo-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/luo-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/luo-en/opus-2020-01-21.zip) * test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/luo-en/opus-2020-01-21.test.txt) * test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/luo-en/opus-2020-01-21.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.luo.en | 29.1 | 0.452 |
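The benchmark row above (BLEU 29.1, chr-F 0.452 on JW300.luo.en) can in principle be re-checked against the linked test-set translations with `sacrebleu`; the file names below are hypothetical placeholders, and recent sacrebleu versions report chrF on a 0-100 scale rather than the 0-1 scale used in the card, so the numbers are comparable only after rescaling:

```python
import sacrebleu

# Hypothetical local copies of the hypothesis and reference sides of the test set.
with open("luo-en.hyp.txt", encoding="utf-8") as f:
    hypotheses = [line.rstrip("\n") for line in f]
with open("luo-en.ref.txt", encoding="utf-8") as f:
    references = [line.rstrip("\n") for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}")  # card reports 29.1
print(f"chrF = {chrf.score:.1f}")  # card reports 0.452 on a 0-1 scale
```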
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-luo-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "luo", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #luo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-luo-en * source languages: luo * target languages: en * OPUS readme: luo-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 29.1, chr-F: 0.452
[ "### opus-mt-luo-en\n\n\n* source languages: luo\n* target languages: en\n* OPUS readme: luo-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.452" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #luo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-luo-en\n\n\n* source languages: luo\n* target languages: en\n* OPUS readme: luo-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.452" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #luo #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-luo-en\n\n\n* source languages: luo\n* target languages: en\n* OPUS readme: luo-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 29.1, chr-F: 0.452" ]
translation
transformers
### opus-mt-lus-en * source languages: lus * target languages: en * OPUS readme: [lus-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-en/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lus.en | 37.0 | 0.534 |
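Since the tag list for this checkpoint includes `tf`, it should also be loadable through the TensorFlow Marian class; a hedged sketch with a placeholder input sentence:

```python
from transformers import MarianTokenizer, TFMarianMTModel

model_name = "Helsinki-NLP/opus-mt-lus-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = TFMarianMTModel.from_pretrained(model_name)

batch = tokenizer(["<source sentence placeholder>"], return_tensors="tf", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```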
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lus-en
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lus", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lus #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lus-en * source languages: lus * target languages: en * OPUS readme: lus-en * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 37.0, chr-F: 0.534
[ "### opus-mt-lus-en\n\n\n* source languages: lus\n* target languages: en\n* OPUS readme: lus-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.534" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lus #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lus-en\n\n\n* source languages: lus\n* target languages: en\n* OPUS readme: lus-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.534" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lus #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lus-en\n\n\n* source languages: lus\n* target languages: en\n* OPUS readme: lus-en\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 37.0, chr-F: 0.534" ]
translation
transformers
### opus-mt-lus-es * source languages: lus * target languages: es * OPUS readme: [lus-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-es/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-es/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-es/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lus.es | 21.6 | 0.389 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lus-es
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lus", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lus #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lus-es * source languages: lus * target languages: es * OPUS readme: lus-es * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 21.6, chr-F: 0.389
[ "### opus-mt-lus-es\n\n\n* source languages: lus\n* target languages: es\n* OPUS readme: lus-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.6, chr-F: 0.389" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lus #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lus-es\n\n\n* source languages: lus\n* target languages: es\n* OPUS readme: lus-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.6, chr-F: 0.389" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lus #es #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lus-es\n\n\n* source languages: lus\n* target languages: es\n* OPUS readme: lus-es\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 21.6, chr-F: 0.389" ]
translation
transformers
### opus-mt-lus-fi * source languages: lus * target languages: fi * OPUS readme: [lus-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-fi/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fi/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fi/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lus.fi | 22.6 | 0.441 |
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lus-fi
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lus", "fi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lus #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lus-fi * source languages: lus * target languages: fi * OPUS readme: lus-fi * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 22.6, chr-F: 0.441
[ "### opus-mt-lus-fi\n\n\n* source languages: lus\n* target languages: fi\n* OPUS readme: lus-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.441" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lus #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lus-fi\n\n\n* source languages: lus\n* target languages: fi\n* OPUS readme: lus-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.441" ]
[ 52, 108 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lus #fi #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lus-fi\n\n\n* source languages: lus\n* target languages: fi\n* OPUS readme: lus-fi\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 22.6, chr-F: 0.441" ]
translation
transformers
### opus-mt-lus-fr * source languages: lus * target languages: fr * OPUS readme: [lus-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lus.fr | 25.5 | 0.423 |
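The lus-* checkpoints documented in this section all share the same Marian interface, so they can be loaded generically through the Auto classes. A sketch iterating over three of the ids that appear in this section (placeholder input, CPU-only, no error handling):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

pairs = [
    "Helsinki-NLP/opus-mt-lus-es",
    "Helsinki-NLP/opus-mt-lus-fi",
    "Helsinki-NLP/opus-mt-lus-fr",
]

for name in pairs:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSeq2SeqLM.from_pretrained(name)
    batch = tokenizer(["<source sentence placeholder>"], return_tensors="pt")
    generated = model.generate(**batch)
    print(name, tokenizer.batch_decode(generated, skip_special_tokens=True))
```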
{"license": "apache-2.0", "tags": ["translation"]}
Helsinki-NLP/opus-mt-lus-fr
null
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lus", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #marian #text2text-generation #translation #lus #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
### opus-mt-lus-fr * source languages: lus * target languages: fr * OPUS readme: lus-fr * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: URL * test set translations: URL * test set scores: URL Benchmarks ---------- testset: URL, BLEU: 25.5, chr-F: 0.423
[ "### opus-mt-lus-fr\n\n\n* source languages: lus\n* target languages: fr\n* OPUS readme: lus-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.423" ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lus #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### opus-mt-lus-fr\n\n\n* source languages: lus\n* target languages: fr\n* OPUS readme: lus-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.423" ]
[ 52, 109 ]
[ "TAGS\n#transformers #pytorch #tf #marian #text2text-generation #translation #lus #fr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n### opus-mt-lus-fr\n\n\n* source languages: lus\n* target languages: fr\n* OPUS readme: lus-fr\n* dataset: opus\n* model: transformer-align\n* pre-processing: normalization + SentencePiece\n* download original weights: URL\n* test set translations: URL\n* test set scores: URL\n\n\nBenchmarks\n----------\n\n\ntestset: URL, BLEU: 25.5, chr-F: 0.423" ]